Indication of intra-prediction mode selection for video encoding using CABAC
Patent abstract:
INTRA-PREDICTION MODE SELECTION INDICATION FOR VIDEO ENCODING USING CABAC. For a video data block, a video encoder can signal to a video decoder, using a context-based adaptive binary arithmetic coding (CABAC) process, a selected intra-prediction mode by means of a code word that is mapped to a modified intra-prediction mode index. The video decoder can perform a context-based adaptive binary arithmetic coding (CABAC) process to determine the code word signaled by the video encoder, determine the modified intra-prediction mode index corresponding to the code word, determine the most likely modes based on a context, map the modified intra-prediction mode index to an intra-prediction mode index by comparing the modified intra-prediction mode index with the most likely mode indices, and determine the selected intra-prediction mode used to encode the video data block based on the intra-prediction mode index.
Publication number: BR112013017423A2
Application number: R112013017423-4
Filing date: 2012-01-05
Publication date: 2020-09-01
Inventors: Marta Karczewicz; Xianglin Wang; Wei-Jung Chien
Applicant: Qualcomm Incorporated
IPC main class:
Patent description:
"INTRA-FORECAST MODE SELECTION INDICATION FOR VIDEO CODING USING CABAC" Cross Reference to Related Orders This application claims the benefit of the U.S. provisional 5 patent application No. 61 / 430,520, filed on January 6, 2011; of the U.S. provisional application No. 61 / 446,402, filed February 24, 2011, and the U.S. provisional application No. 61 / 448,623, filed on March 2, <. 2011, all the content of each one being incorporated here- by 10 references. Field of the Invention This description refers to video encoding, and more particularly to signaling the encoding characteristics for encoded video data. 15 Description of the Prior Art Digital video capabilities can be incorporated into a wide variety of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, digital assistants 20 people (PDAS), Iapt computers: op and desktop, digital cameras, digital recording devices, digital media devices, video mode devices, video game consoles, radio cell phones or satellite phones, video teleconferencing devices , and the like. The 25 digital video devices implement video compression techniques, such as those described in the standards defined by the MPEG-2, MPEG-2, ITU-T H.263 and ITU-T H.264 / MPEG-4, Part 10, Advanced Video Encoding (AVC), and extensions of such standards, to transmit and receive digital video information more efficiently. m Video compression techniques perform spatial and / or temporal forecasting to reduce or remove the redundancy inherent in video sequences. For block-based video encoding, a video frame or slice can be divided into video blocks. Each video block can be further divided. The video blocks in an intra-coded frame or slice (I) are 5 encoded using spatial prediction with respect to neighboring video blocks. 
Video blocks in an inter-coded (P or B) frame or slice can use spatial prediction with respect to neighboring macroblocks or coding units in the same frame or slice, or temporal prediction with respect to other reference frames.
Summary of the Invention
In general, this description describes techniques for signaling encoding characteristics for encoded video data. The techniques described herein can improve signaling efficiency for an intra-prediction mode used to encode a block of video data. The techniques of this description include signaling, in a coded bitstream, intra-prediction modes for video data blocks using code words. The techniques additionally include coding the code words using a context-based adaptive binary arithmetic coding (CABAC) process. In this way, a relative bit savings can be achieved for a coded bitstream when the techniques of this description are used.
In one example, a method of decoding video data includes determining a first most likely intra-prediction mode and a second most likely intra-prediction mode for an encoded block of video data based on a context for the current block; selecting a code word table based on the context for the current block, where the code word table comprises a plurality of code words corresponding to modified intra-prediction mode indices that correspond to the intra-prediction modes in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode; performing a CABAC process to determine a received code word; determining one of the modified intra-prediction mode indices that corresponds to the received code word using the code word table; selecting an intra-prediction mode in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode to be used to decode the coded block, where the selected intra-prediction mode corresponds to the determined index among the modified intra-prediction mode indices; and decoding the current block using the selected intra-prediction mode.
In one example, an apparatus for decoding video data includes a video decoder configured to determine a first most likely intra-prediction mode and a second most likely intra-prediction mode for an encoded block of video data based on a context for the current block; select a code word table based on the context for the current block, in which the code word table comprises a plurality of code words corresponding to the modified intra-prediction mode indices corresponding to the intra-prediction modes in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode; perform a CABAC process to determine a received code word; determine one of the modified intra-prediction mode indices that corresponds to the received code word using the code word table; select an intra-prediction mode in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode for use to decode the coded block, where the selected intra-prediction mode corresponds to the determined index among the modified intra-prediction mode indices; and decode the current block using the selected intra-prediction mode.
In one example, a method of encoding video data includes determining a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data based on an encoding context for the current block; selecting a table of code words based on the context for the current block, where the table of code words comprises a plurality of code words corresponding to the modified intra-prediction mode indices corresponding to the intra-prediction modes in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode; encoding the current block using one of the intra-prediction modes in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode; determining one of the modified intra-prediction mode indices that corresponds to the one of the intra-prediction modes using the code word table; and coding a code word from the selected code word table by performing a CABAC process, in which the code word corresponds to the one of the modified intra-prediction mode indices.
In one example, an apparatus for encoding video data includes a video encoder configured to determine a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data based on an encoding context for the current block; select a table of code words based on the context for the current block, where the table of code words comprises a plurality of code words corresponding to the modified intra-prediction mode indices corresponding to the intra-prediction modes in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode; encode the current block using one of the intra-prediction modes in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode; determine one of the modified intra-prediction mode indices that corresponds to the one of the intra-prediction modes using the code word table; and code a code word from the selected table of code words by performing a CABAC process, in which the code word corresponds to the one of the modified intra-prediction mode indices.
In one example, a video decoding apparatus includes means for determining a first most likely intra-prediction mode and a second most likely intra-prediction mode for an encoded block of video data based on a context for the
current block; means for selecting a table of code words based on the context for the current block, in which the table of code words comprises a plurality of code words corresponding to the modified intra-prediction mode indices corresponding to the intra-prediction modes in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode; means for performing a CABAC process to determine a received code word; means for determining one of the modified intra-prediction mode indices that corresponds to the code word received using the code word table; means for selecting an intra-prediction mode in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode for use in decoding the encoded block, where the selected intra-prediction mode corresponds to an index determined among the modified intra-prediction mode indices; and means for decoding the current block using the selected intra-prediction mode.
In one example, an apparatus for encoding video data includes means for determining a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data based on an encoding context for the current block; means for selecting a table of code words based on the context for the current block, in which the table of code words comprises a plurality of code words corresponding to the modified intra-prediction mode indices corresponding to the intra-prediction modes in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode; means for determining one of the modified intra-prediction mode indices that corresponds to one of the intra-prediction modes using the code word table; and means for encoding a code word from the selected code word table by performing a CABAC process, in which the code word corresponds to the one of the modified intra-prediction mode indices.
In one example, a computer-readable storage medium has instructions stored thereon that, when executed, cause one or more processors to determine a first most likely intra-prediction mode and a second most likely intra-prediction mode for an encoded block of video data based on a context for the current block; select a table of code words based on the context for the current block, where the table of code words comprises a plurality of code words corresponding to the modified intra-prediction mode indices corresponding to the intra-prediction modes in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode; perform a CABAC process to determine a received code word; determine one of the modified intra-prediction mode indices that corresponds to the code word received using the code word table; select an intra-prediction mode in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode to use for decoding the coded block, where the selected intra-prediction mode corresponds to an index determined among the modified intra-prediction mode indices; and decode the current block using the selected intra-prediction mode.
In one example, a computer-readable storage medium has instructions stored thereon that, when executed, cause one or more processors to determine a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data based on an encoding context for the current block; select a code word table based on the context for the current block, where the code word table comprises a plurality of code words corresponding to the modified intra-prediction mode indices corresponding to the intra-prediction modes in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode; encode the current block using one of the intra-prediction modes in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode; determine one of the modified intra-prediction mode indices that corresponds to the one of the intra-prediction modes using the code word table; and code a code word from the selected table of code words by performing a CABAC process, in which the code word corresponds to the one of the modified intra-prediction mode indices.
In one example, a method of decoding video data includes determining a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data based on a context for the current block; selecting a table of code words based on the context for the current block, where the table of code words comprises a plurality of code words corresponding to code word indices, where the code word indices are mapped to intra-prediction modes; performing a CABAC process to determine a received code word; determining a modified code word index that corresponds to the code word received using the code word table; selecting an intra-prediction mode in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode for use in decoding the coded block, where the selected intra-prediction mode corresponds to a code word index selected based on the modified code word index, the first most likely intra-prediction mode, and the second most likely intra-prediction mode; and decoding the current block using the selected intra-prediction mode.
In one example, an apparatus for decoding video data includes a video decoder configured to determine a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data based on a context for the current block; select a table of code words based on the context for the current block, where the table of code words comprises a plurality of code words corresponding to code word indices, where the code word indices are mapped to intra-prediction modes; perform a CABAC process to determine a received code word; determine a modified code word index that corresponds to the code word received using the code word table; select an intra-prediction mode in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode for use in decoding the coded block, where the selected intra-prediction mode corresponds to a code word index selected based on the modified code word index, the first most likely intra-prediction mode and the second most likely intra-prediction mode; and decode the current block using the selected intra-prediction mode.
In one example, a video decoding device includes means for determining a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data based on a context for the
current block; means for selecting a code word table based on the context for the current block, in which the code word table comprises a plurality of code words corresponding to code word indices, in which the code word indices are mapped to intra-prediction modes; means for performing a CABAC process to determine a received code word; means for determining a modified code word index that corresponds to the code word received using the code word table; means for selecting an intra-prediction mode in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode for use in decoding the coded block, where the selected intra-prediction mode corresponds to a code word index selected based on the modified code word index, the first most likely intra-prediction mode and the second most likely intra-prediction mode; and means for decoding the current block using the selected intra-prediction mode.
In one example, a computer-readable storage medium has instructions stored thereon that, when executed, cause one or more processors to determine a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data based on a context for the current block; select a table of code words based on the context for the current block, where the table of code words comprises a plurality of code words corresponding to code word indices, where the code word indices are mapped to intra-prediction modes; perform a CABAC process to determine a received code word; determine a modified code word index that corresponds to the code word received using the code word table; select an intra-prediction mode in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode for use in decoding the encoded block, where the selected intra-prediction mode corresponds to a code word index selected based on the modified code word index, the first
most likely intra-prediction mode and the second most likely intra-prediction mode; and decode the current block using the selected intra-prediction mode.
In one example, a method of encoding video data includes determining a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data based on an encoding context for the current block; selecting a table of code words based on the context for the current block, where the table of code words comprises a plurality of code words corresponding to code word indices, where the code word indices are mapped to intra-prediction modes; encoding the current block using one of the intra-prediction modes in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode; determining a modified code word index based on the code word index of the one of the intra-prediction modes used to encode the current block, a code word index mapped to the first most likely mode, and a code word index mapped to the second most likely mode; and coding a code word from the selected table of code words by performing a CABAC process, in which the code word corresponds to the modified code word index.
In one example, an apparatus for encoding video data includes a video encoder configured to determine a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data based on an encoding context for the current block; select a table of code words based on the context for the current block, in which the table of code words comprises a plurality of code words corresponding to code word indices, in which the code word indices are mapped to intra-prediction modes; encode the current block using one of the intra-prediction modes in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode; determine a modified code word index based on the code word index of the one of the intra-prediction modes used to encode the current block, a code word index mapped to the first most likely mode, and a code word index mapped to the second most likely mode; and code a code word from the selected table of code words by performing a CABAC process, in which the code word corresponds to the modified code word index.
In one example, a video encoding apparatus includes means for determining a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data based on an encoding context for the current block; means for selecting a table of code words based on the context for the current block, where the table of code words comprises a plurality of code words corresponding to code word indices, where the code word indices are mapped to intra-prediction modes; means for encoding the current block using one of the intra-prediction modes in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode; means for determining a modified code word index based on the code word index of the one of the intra-prediction modes used to encode the current block, a code word index mapped to the first most likely mode, and a code word index mapped to the second most likely mode; and means for coding a code word from the selected code word table by performing a CABAC process, in which the code word corresponds to the modified code word index.
In one example, a computer-readable storage medium has instructions stored thereon that, when executed, cause one or more processors to determine a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data based on an encoding context for the current block; select a table of code words based on the context for the current block, in which the table of code words comprises a plurality of code words corresponding to code word indices, in which the code word indices are mapped to intra-prediction modes; encode the current block using one of the intra-prediction modes in addition to the first most likely intra-prediction mode and the second most likely intra-prediction mode; determine a modified code word index based on the code word index of the one of the intra-prediction modes used to encode the current block, a code word index mapped to the first most likely mode, and a code word index mapped to the second most likely mode; and code a code word from the selected code word table by performing a CABAC process, in which the code word corresponds to the modified code word index.
Details of one or more examples are shown in the attached drawings and in the description below. Other characteristics, objectives, and advantages will be apparent from the description and drawings and from the claims.
Brief Description of the Drawings
Figure 1 is a block diagram illustrating an illustrative video encoding and decoding system that can use techniques for encoding syntax data representative of intra-prediction modes for video data blocks;
Figure 2 is a block diagram illustrating an example of a video encoder that can implement techniques for coding information indicative of an intra-prediction mode;
Figure 3 illustrates an example of intra-prediction modes and corresponding mode indices;
Figure 4 is a block diagram illustrating an example of a video decoder, which decodes an encoded video sequence;
Figure 5A is a block diagram illustrating an example of a context-based adaptive binary arithmetic coding unit that can be used according to the techniques described in this description;
Figure 5B is a block diagram illustrating an example of a context-based adaptive binary arithmetic decoding unit that can be used according to the techniques described in this description;
Figure 6 is a flowchart illustrating an illustrative method for the intra-prediction coding of a block of video data;
Figures 7A and 7B are flowcharts illustrating illustrative methods for selecting a code word indicative of an intra-prediction mode for a coded block;
Figure 8 is a flowchart illustrating an illustrative method for the intra-prediction decoding of a video data block;
Figures 9A and 9B are flowcharts illustrating illustrative methods for determining an intra-prediction mode for a block using a received code word indicative of the intra-prediction mode for a coded block;
Figure 10 is a conceptual diagram illustrating an example of configuration data, which indicates the relationships between an intra-prediction mode index table, a modified intra-prediction mode index table, and context data.
Detailed Description of the Invention
In general, this description describes techniques for signaling encoding characteristics for encoded video data, and more particularly, this description describes the use of a context-based adaptive binary arithmetic coding (CABAC) process to signal intra-prediction modes to a video decoder. The techniques of this description can improve the efficiency of signaling an intra-prediction mode used to intra-encode a block of video data. A video encoder, for example, can include configuration data that indicates the indices for the intra-prediction modes based on the coding contexts for the blocks encoded using the various intra-prediction modes. Encoding contexts may include, for example, encoding modes for previously encoded neighboring blocks and/or block sizes. The configuration data can be used to define a most likely intra-prediction mode for each context or can define two or more most likely intra-prediction modes for each context. These most likely intra-prediction modes may sometimes be referred to in this description simply as the most likely modes. The configuration data can also define a mapping table for use in encoding syntax data describing the intra-prediction mode for modes beyond the most likely modes in a given context. In particular, the mapping table can include a mapping of indices to code words. As will be described in more detail below, the mapping table can map modified intra-prediction mode indices to code words, or it can map the intra-prediction mode indices to code word indices, which are then adjusted into modified code word indices. Accordingly, the video encoder can be configured to determine a coding context for a block to be encoded using an intra-prediction mode. The coding context can be related to a most likely intra-prediction mode, in addition to probabilities of other intra-prediction modes.
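The configuration data described above can be sketched as a simple lookup keyed by coding context. This is only an illustration: the context identifiers, mode indices, and code words below are hypothetical values, not values taken from this description.

```python
# Hypothetical configuration data: each coding context maps to its most
# likely intra-prediction mode indices and to a code word table whose
# entries are indexed by modified intra-prediction mode index.
CONFIG_DATA = {
    # context id: (most likely mode indices, code words for remaining modes)
    "ctx0": ((2,), ["00", "01", "100", "101", "110", "111"]),
    "ctx1": ((0, 1), ["00", "01", "10", "110", "1110", "1111"]),
}

def lookup(context_id):
    """Return (most_likely_modes, codeword_table) for a coding context."""
    return CONFIG_DATA[context_id]
```

Note that "ctx0" defines one most likely mode (so its table holds one fewer code word than there are modes), while "ctx1" defines two, matching the two configurations discussed in this description.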
When the most likely intra-prediction mode is selected for use in encoding a current block, the video encoder can select a one-bit code word (for example, '1') to indicate that the block is encoded in the most likely mode for the context in which the block occurs. In cases where more than one most likely intra-prediction mode is used, a first bit can indicate whether one of the most likely intra-prediction modes is selected for use in encoding a current block, and if one of the most likely intra-prediction modes is used, then a second bit (or series of bits) can indicate which of the most likely intra-prediction modes is selected. At times throughout this description, the combination of that first bit and the second bit can be referred to as a code word, with the first bit of the code word signaling that a selected intra-prediction mode is one of the most likely intra-prediction modes, and the second bit (or series of bits) identifying which of the most likely intra-prediction modes. According to the techniques of this description, a code word indicating whether a selected mode is a most likely mode, and which of the most likely modes the selected mode is, can be coded using a CABAC process as described in this description. In addition, in some cases, the bits used for signaling the most likely modes in conjunction with a code word identifying a mode that is not a most likely mode can be treated together as a code word and encoded using a CABAC process as described in this description. Each of the other intra-prediction modes (that is, the intra-prediction modes in addition to the most likely intra-prediction modes) can also receive a modified index value, based on the coding context. In addition, the coding context may additionally correspond to a table with a set of code words indexed by index values related to the indices for the intra-prediction modes.
In particular, as discussed above, the index value for the most likely mode need not receive another code word, apart from the single-bit (or possibly longer) code word representing that the most likely intra-prediction mode was selected. To map a code word to each remaining intra-prediction mode, the index of each remaining intra-prediction mode can first be modified to exclude the index originally allocated to the most likely mode. Accordingly, the modified intra-prediction mode indices may be the same as the intra-prediction mode indices that are lower than the mode index for the most likely mode. On the other hand, when one most likely mode is used, the modified intra-prediction mode indices may be one less than the intra-prediction mode indices for the intra-prediction mode indices that are greater than the index for the most likely mode. In this way, there may be one fewer code word than intra-prediction modes, and the code words can be mapped to the intra-prediction modes based on the coding context. When more than one of the most likely intra-prediction modes is used, there may be two or more fewer code words than intra-prediction modes, and the code words can likewise be mapped to intra-prediction modes based on the coding context. The code word can be encoded using a CABAC process. A video decoder can be configured in a similar way, for example, to perform similar techniques when an intra-prediction mode is determined for an encoded block. According to the techniques of this description, a video decoder can receive data for an encoded block, in addition to a code word indicating an intra-prediction mode for use in decoding the encoded block. The video decoder can receive and decode the code word by performing a CABAC process that is generally the inverse of the CABAC process performed by the video encoder. The video decoder can determine a context for the block in a similar way to the video encoder.
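The single-most-likely-mode index remapping just described can be sketched as follows. This is a minimal illustration; the function names are assumptions, not taken from this description.

```python
def to_modified_index(mode_index, most_likely_index):
    """Map an intra-prediction mode index to a modified index that skips
    the most likely mode, which is signaled with its own one-bit code word."""
    if mode_index == most_likely_index:
        raise ValueError("the most likely mode receives no modified index")
    # Indices below the most likely mode are unchanged; indices above it
    # are reduced by one, so the modified indices stay contiguous.
    return mode_index if mode_index < most_likely_index else mode_index - 1

def from_modified_index(modified_index, most_likely_index):
    """Inverse mapping, as applied by the video decoder."""
    return (modified_index if modified_index < most_likely_index
            else modified_index + 1)
```

For example, with nine mode indices and a most likely mode index of 4, the eight remaining modes map to modified indices 0 through 7, so one fewer code word than modes is needed.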
Based on the context, the video decoder can determine a most likely intra-prediction mode, or modes, for the block. When one most likely intra-prediction mode is used, a single bit can be decoded to determine whether the most likely mode is selected. If the single bit indicates that the most likely mode is selected, the video decoder can decode the block using the most likely intra-prediction mode. Otherwise, the video decoder may refer to the modified intra-prediction mode index mapped to the received code word. If the modified intra-prediction mode index is greater than or equal to the mode index for the most likely intra-prediction mode, the video decoder can decode the block using the intra-prediction mode mapped to a mode index that is one greater than the modified intra-prediction mode index. If the modified intra-prediction mode index is less than the mode index for the most likely intra-prediction mode, the video decoder can decode the block using the intra-prediction mode mapped to a mode index that is equal to the modified intra-prediction mode index. Similarly, when two most likely intra-prediction coding modes are used, if the first bit or series of bits indicates that the selected intra-prediction mode is one of the most likely intra-prediction modes, then the decoder can decode the block using the most likely intra-prediction mode identified by a second bit. Otherwise, the video decoder may refer to the modified intra-prediction mode index mapped to the received code word. If the modified intra-prediction mode index is less than the mode index for the first most likely intra-prediction mode, the video decoder can decode the block using the intra-prediction mode mapped to a mode index that is equal to the modified intra-prediction mode index.
Otherwise, if the modified intra-prediction mode index plus one is less than the mode index for the second most likely intra-prediction mode, then the video decoder can decode the block using the intra-prediction mode mapped to a mode index that is one greater than the modified intra-prediction mode index. Otherwise, the video decoder can decode the block using the intra-prediction mode mapped to a mode index that is two greater than the modified intra-prediction mode index, and so on. The phrases "first most likely" and "second most likely" are generally used in this description to refer to two separate most likely intra-prediction modes, and should not imply a relative likelihood between the two intra-prediction modes. As will be explained later through examples, however, it can generally be considered that, for purposes of explanation in this description, the first most likely intra-prediction mode has a correspondingly lower index value than the second most likely intra-prediction mode. Thus, if a modified intra-prediction mode index value is considered to be less than the mode index for a first most likely mode, it can be assumed that the modified intra-prediction mode index value is also less than the mode index for a second most likely intra-prediction mode, third most likely intra-prediction mode, and so on. The techniques in this description can be extended to implementations that use more than two most likely intra-prediction modes. For example, considering that there are N most likely intra-prediction modes, a first bit or series of bits may indicate whether the selected intra-prediction mode is one of the N most likely intra-prediction modes. If the selected intra-prediction mode is one of the N most likely intra-prediction modes, then a second set of bits can identify which of the N most likely intra-prediction modes is the selected intra-prediction mode.
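The two-most-likely-mode decoder rule above can be sketched as follows, under the stated convention that the first most likely mode index is lower than the second. The function name is an assumption for illustration.

```python
def mode_from_modified_index_two(modified_index, mpm1, mpm2):
    """Recover an intra-prediction mode index from a modified index when
    two most likely modes (mpm1 < mpm2) carry no modified index."""
    if modified_index < mpm1:
        return modified_index          # below both most likely modes
    if modified_index + 1 < mpm2:
        return modified_index + 1      # between the two most likely modes
    return modified_index + 2          # above both most likely modes
```

For example, with mode indices 0 through 6 and most likely modes 2 and 5, the remaining modes 0, 1, 3, 4, 6 receive modified indices 0 through 4, and the function inverts that mapping.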
For example, using three most likely modes, two bits can be used to signal whether the selected intra-prediction mode is one of the most likely modes as follows: signal a "00" to indicate that the first most likely mode is used; signal a "01" to indicate that the second most likely mode is used; signal a "10" to indicate that the third most likely mode is used. If none of the most likely modes is used, an additional codeword can be used to signal the selected intra-prediction mode. In some cases, the most likely intra-prediction modes can be signaled in one or more groups, where a first bit or series of bits signals whether the selected intra-prediction mode is from a first group. If the selected intra-prediction mode is not from the first group, then subsequent bits can signal whether it is from a second group, and so on. If, for example, five most likely modes are used, then a first bit or series of bits may signal whether the selected intra-prediction mode is from a first group of two most likely intra-prediction modes. If the selected mode is one of the two, then a second bit can identify which of the two is the selected mode. If the selected mode is not one of the two, then a second group of bits can identify the selected mode. If, for example, the second group of bits includes two bits, then a first combination of bits (for example, 00) can indicate that the selected mode is a third most likely mode, a second combination of bits (for example, 01) can indicate that the selected mode is a fourth most likely mode, and a third combination of bits (for example, 10) can indicate that the selected mode is a fifth most likely mode. If the selected mode is one of the five most likely intra-prediction modes, then the decoder can decode the block using that most likely mode.
A fourth bit combination (for example, 11) may indicate that the selected mode is not one of the five most likely modes, in which case the fourth bit combination can be followed by subsequent bits identifying the selected mode according to the techniques described in this description. In cases where the selected mode is not a most likely mode, the video decoder can refer to the modified intra-prediction mode index mapped to the received codeword. For purposes of example, it can be assumed that a first most likely intra-prediction mode has a lower corresponding index value than a second most likely intra-prediction mode, that the second has a lower index than the third, and so forth. According to the techniques of this description, if the modified intra-prediction mode index is less than the mode index for the first most likely intra-prediction mode, the video decoder can decode the block using the intra-prediction mode mapped to a mode index that is equal to the modified intra-prediction mode index. Otherwise, if the modified intra-prediction mode index plus one is less than the mode index for the second most likely intra-prediction mode, then the video decoder can decode the block using the intra-prediction mode mapped to a mode index that is one greater than the modified intra-prediction mode index. Otherwise, if the modified intra-prediction mode index plus two is less than the mode index for the third most likely intra-prediction mode, then the video decoder can decode the block using the intra-prediction mode mapped to a mode index that is two greater than the modified intra-prediction mode index, and so on. As will be explained in more detail below, the modified intra-prediction mode index may not include entries for the most likely modes, which is why the intra-prediction mode index can be mapped to the modified intra-prediction mode index
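The cascade described above, in which each most likely mode's index is skipped in ascending order, can be sketched as a short loop. This is an illustrative sketch only; the function name and list representation are assumptions, not part of any standard.

```python
def decode_mode_index_n_mpm(n, mpm_indices):
    """Map a modified intra-prediction mode index n to the actual mode
    index j, given the mode indices of the most likely modes.
    Each most likely mode index that the running value reaches is
    skipped, since those modes have no entry in the mapping table."""
    j = n
    for m in sorted(mpm_indices):
        if j >= m:
            j += 1  # skip over the most likely mode at index m
    return j
```

With mpm_indices = [2, 5], a modified index of 4 first becomes 5 (skipping mode 2) and then 6 (skipping mode 5), matching the "plus one ... plus two" comparisons in the text.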
plus one, the modified intra-prediction mode index plus two, and so on, depending on the mode indices of the most likely modes. Figure 1 is a block diagram illustrating an illustrative video encoding and decoding system 10 that can use techniques for encoding syntax data representative of the intra-prediction modes for blocks of video data. As illustrated in Figure 1, system 10 includes a source device 12 that transmits encoded video to a destination device 14 over a communication channel 16. The source device 12 and the destination device 14 can comprise any of a wide range of devices. In some cases, the source device 12 and the destination device 14 may comprise wireless communication devices, such as so-called cellular or satellite radiotelephones, or any wireless devices that can communicate video information over a communication channel 16, in which case the communication channel 16 is wireless. The techniques of this description, however, which concern the encoding of syntax data representative of the intra-prediction modes for blocks of video data, are not necessarily limited to wireless applications or configurations. For example, these techniques may apply to over-the-air television broadcasts, cable television transmissions, satellite television transmissions, Internet video transmissions, encoded digital video that is encoded onto a storage medium, or other situations. Accordingly, the communication channel 16 can comprise any combination of wireless or wired media suitable for transmission of encoded video data. Moreover, the communication channel 16 represents merely one of the many ways in which a video encoding device can transmit data to a video decoding device. For example, in other configurations of system 10, the source device 12 may generate encoded video for decoding by the destination device 14 and store the encoded video on a storage medium or a file server, so that the encoded video can be accessed by the destination device 14 as desired.
In the example of Figure 1, the source device 12 includes a video source 18, a video encoder 20, a modulator/demodulator (modem) 22, and a transmitter 24. The destination device 14 includes a receiver 26, a modem 28, a video decoder 30, and a display device 32. In accordance with this description, the video encoder 20 of the source device 12 can be configured to apply the techniques for encoding syntax data representative of the intra-prediction modes for blocks of video data. In other examples, a source device and a destination device can include other components or arrangements. For example, the source device 12 can receive video data from an external video source 18, such as an external camera. Likewise, the destination device 14 can interface with an external display device, instead of including an integrated display device. The illustrated system 10 of Figure 1 is merely one example. Techniques for encoding syntax data representative of the intra-prediction modes for blocks of video data can be performed by any digital video encoding and/or decoding device. Although the techniques of this description are generally performed by a video encoding device, the techniques can also be performed by a video encoder/decoder, typically referred to as a "CODEC". In addition, the techniques of this description can also be performed by a video processor. The source device 12 and the destination device 14 are merely examples of such encoding devices, in which the source device 12 generates encoded video data for transmission to the destination device 14. In some examples, devices 12, 14 include video encoding and decoding components. Hence, system 10 can support one-way or two-way video transmission between video devices 12, 14, for example, for video streaming, video playback, video broadcasting, or video telephony.
The video source 18 of the source device 12 may include a video capture device, such as a video camera, a video archive containing previously captured video, and/or a video feed from a video content provider. As a further alternative, the video source 18 can generate computer graphics-based data as the source video, or a combination of live video, archived video, and computer-generated video. In some cases, if the video source 18 is a video camera, the source device 12 and the destination device 14 can form so-called camera phones or video phones. As mentioned above, however, the techniques described in this description may be applicable to video encoding in general, and can be applied to wireless and/or wired applications. In each case, the captured, pre-captured, or computer-generated video can be encoded by the video encoder 20. The encoded video information can then be modulated by the modem 22 according to a communication standard, and transmitted to the destination device 14 via the transmitter 24. The modem 22 may include various mixers, filters, amplifiers, or other components designed for signal modulation. The transmitter 24 may include circuits designed for the transmission of data, including amplifiers, filters, and one or more antennas. The receiver 26 of the destination device 14 receives information over the channel 16, and the modem 28 demodulates the information. Again, the video decoding process can implement one or more of the techniques described here for encoding syntax data representative of intra-prediction modes for blocks of video data. The information communicated over the channel 16 can include syntax information defined by the video encoder 20, which is also used by the video decoder 30, and which includes syntax elements that describe the characteristics and/or processing of macroblocks and other coded units, for example, GOPs.
The display device 32 displays the decoded video data to a user, and can comprise any of a variety of display devices such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, an organic light-emitting diode (OLED) display, or another type of display device. In the example of Figure 1, the communication channel 16 can comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines, or any combination of wireless and wired media. The communication channel 16 can form part of a packet-based network, such as a local area network, a wide area network, or a global network such as the Internet. The communication channel 16 generally represents any suitable communication medium, or collection of different communication media, for transmitting video data from the source device 12 to the destination device 14, including any suitable combination of wired or wireless media. The communication channel 16 can include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from the source device 12 to the destination device 14. The video encoder 20 and the video decoder 30 can operate according to a video compression standard, such as the ITU-T H.264 standard, alternatively referred to as MPEG-4, Part 10, Advanced Video Coding (AVC). The techniques of this description, however, are not limited to any particular coding standard. Other examples include MPEG-2 and ITU-T H.263. Although not shown in Figure 1, in some aspects, the video encoder 20 and the video decoder 30 can each be integrated with an audio encoder and decoder, and may include suitable MUX-DEMUX units, or other hardware and software, to handle the encoding of both audio and video in a common data stream or separate data streams. If applicable, the MUX-DEMUX units can conform to the ITU H.223 multiplexer protocol, or to other protocols such as the user datagram protocol (UDP).
The ITU-T H.264/MPEG-4 (AVC) standard was formulated by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG) as the product of a collective partnership known as the Joint Video Team (JVT). In some aspects, the techniques described in this description can be applied to devices that generally conform to the H.264 standard. The H.264 standard is described in ITU-T Recommendation H.264, Advanced Video Coding for generic audiovisual services, by the ITU-T Study Group, dated March 2005, which may be referred to here as the H.264 standard or H.264 specification, or the H.264/AVC standard or specification. The Joint Video Team (JVT) continues to work on extensions to H.264/MPEG-4 AVC. The video encoder 20 and the video decoder 30 may each be implemented as any of a variety of suitable encoder circuitry, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware, or any combinations thereof. Each of the video encoder 20 and the video decoder 30 can be included in one or more encoders or decoders, either of which can be integrated as part of a combined encoder/decoder (CODEC) in a respective camera, computer, mobile device, subscriber device, broadcast device, set-top box, or the like. A video sequence typically includes a series of video frames. A group of pictures (GOP) generally comprises a series of one or more video frames. A GOP can include syntax data in a GOP header, a header of one or more frames of the GOP, or elsewhere, that describes the number of frames included in the GOP. Each frame can include frame syntax data that describes an encoding mode for the respective frame. The video encoder 20 typically operates on video blocks within individual video frames in order to encode the video data.
A video block can correspond to a macroblock or to a partition of a macroblock. Video blocks can have fixed or variable sizes, and can differ in size according to a specified coding standard. Each video frame can include a plurality of slices. Each slice can include a plurality of macroblocks, which can be arranged into partitions, also referred to as sub-blocks. As an example, the ITU-T H.264 standard supports intra-prediction in various block sizes, such as 16x16, 8x8, or 4x4 for luma components and 8x8 for chroma components, as well as inter-prediction in various block sizes, such as 16x16, 16x8, 8x16, 8x8, 8x4, 4x8, and 4x4 for luma components and corresponding scaled sizes for chroma components. In this description, "NxN" and "N by N" can be used interchangeably to refer to the pixel dimensions of the block in terms of vertical and horizontal dimensions, for example, 16x16 pixels or 16 by 16 pixels. In general, a 16x16 block will have 16 pixels in the vertical direction (y = 16) and 16 pixels in the horizontal direction (x = 16). Likewise, an NxN block will generally have N pixels in a vertical direction and N pixels in a horizontal direction, where N represents a non-negative integer value. The pixels in a block can be arranged in rows and columns. Furthermore, blocks do not necessarily need to have the same number of pixels in the horizontal direction as in the vertical direction. For example, blocks can comprise NxM pixels, where M is not necessarily equal to N. Block sizes smaller than 16x16 can be referred to as partitions of a 16x16 macroblock in ITU-T H.264.
Video blocks can comprise blocks of pixel data in the pixel domain, or blocks of transform coefficients in the transform domain, for example, following application of a transform such as a discrete cosine transform (DCT), an integer transform, a wavelet transform, or a conceptually similar transform to residual video block data representing the pixel differences between encoded video blocks and predictive video blocks. In some cases, a video block may comprise blocks of quantized transform coefficients in the transform domain. Smaller video blocks can provide better resolution, and can be used for locations of a video frame that include higher levels of detail. In general, macroblocks and the various partitions, sometimes referred to as sub-blocks, can be considered video blocks. In addition, a slice can be considered to be a plurality of video blocks, such as macroblocks and/or sub-blocks. Each slice can be an independently decodable unit of a video frame. Alternatively, frames themselves can be decodable units, or other portions of a frame can be defined as decodable units. The term "coded unit" can refer to any independently decodable unit of a video frame, such as an entire frame, a slice of a frame, a group of pictures (GOP) also referred to as a sequence, or another independently decodable unit defined according to the applicable coding techniques. Efforts are currently underway to develop a new video coding standard, currently referred to as High Efficiency Video Coding (HEVC). The emerging HEVC standard can also be referred to as H.265. The standardization efforts are based on a model of a video encoding device referred to as the HEVC Test Model (HM). The HM presumes several capabilities of video encoding devices over devices according to, for example, ITU-T H.264/AVC.
For example, while H.264 provides nine intra-prediction modes, the HM provides as many as thirty-three intra-prediction modes, for example, based on the size of a block being intra-prediction encoded. The HM refers to a block of video data as a coding unit (CU). Syntax data within a bitstream can define a largest coding unit (LCU), which is the largest coding unit in terms of the number of pixels. In general, a CU has a purpose similar to an H.264 macroblock, except that a CU does not have a size distinction. Thus, a CU can be divided into sub-CUs. In general, references in this description to a CU can refer to a largest coding unit of a picture or to a sub-CU of an LCU. An LCU can be divided into sub-CUs, and each sub-CU can be further divided into sub-CUs. Syntax data for a bitstream can define a maximum number of times an LCU can be divided, referred to as the CU depth. Accordingly, a bitstream can also define a smallest coding unit (SCU). This description also uses the term "block" to refer to any of a CU, a prediction unit (PU), or a transform unit (TU). An LCU can be associated with a quadtree data structure. In general, a quadtree data structure includes one node per CU, where a root node corresponds to the LCU. If a CU is divided into four sub-CUs, the node corresponding to the CU includes four leaf nodes, each of which corresponds to one of the sub-CUs. Each node of the quadtree data structure can provide syntax data for the corresponding CU. For example, a node of the quadtree can include a split flag, indicating whether the CU corresponding to the node is divided into sub-CUs. Syntax elements for a CU can be defined recursively, and may depend on whether the CU is divided into sub-CUs. A CU that is not divided can include one or more prediction units (PUs). In general, a PU represents all or a portion of the corresponding CU, and includes data for retrieving a reference sample for the PU.
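The recursive CU quadtree described above can be illustrated with a minimal sketch; the class and field names here are hypothetical, chosen only to mirror the description (a split flag and four child sub-CUs per divided node).

```python
class CUNode:
    """One node of a CU quadtree: an LCU at the root, sub-CUs below."""
    def __init__(self, size, depth=0):
        self.size = size          # block width/height in pixels, e.g. 64
        self.depth = depth        # number of divisions from the LCU
        self.split_flag = False   # syntax element: is this CU divided?
        self.children = []

    def split(self):
        """Divide this CU into four equally sized sub-CUs."""
        self.split_flag = True
        self.children = [CUNode(self.size // 2, self.depth + 1)
                         for _ in range(4)]
```

Splitting a 64x64 LCU once yields four 32x32 sub-CUs at depth 1; each of those may be split again, down to the smallest coding unit (SCU) permitted by the bitstream.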
For example, when the PU is intra-prediction mode encoded, the PU may include data describing an intra-prediction mode for the PU. As another example, when the PU is inter-mode encoded, the PU may include data defining a motion vector for the PU. The data defining the motion vector can describe, for example, a horizontal component of the motion vector, a vertical component of the motion vector, a resolution for the motion vector (for example, one-quarter pixel precision or one-eighth pixel precision), a reference frame to which the motion vector points, and/or a reference list (for example, list 0 or list 1) for the motion vector. Data for the CU defining the PU(s) can also describe, for example, the division of the CU into one or more PUs. The division modes can differ according to whether the CU is not coded, intra-prediction mode coded, or inter-prediction mode coded. A CU having one or more PUs may also include one or more transform units (TUs). Following prediction using a PU, a video encoder can calculate a residual value for the portion of the CU corresponding to the PU. A set of residual values can be transformed, scanned, and quantized to define a set of transform coefficients. A TU defines a data structure that includes the transform coefficients. A TU is not necessarily limited to the size of a PU. Thus, TUs can be larger or smaller than the corresponding PUs for the same CU. In some examples, the maximum size of a TU may correspond to the size of the corresponding CU. In accordance with the techniques of this description, the video encoder 20 can encode certain blocks of video data using intra-prediction mode encoding, and provide information indicating a selected intra-prediction mode used to encode the block.
The video encoder 20 can intra-prediction encode blocks of any type of frame or slice using an intra-prediction mode, for example, I-frames or I-slices, in addition to P-frames or P-slices and B-frames or B-slices. When the video encoder 20 determines that a block should be intra-prediction mode encoded, the video encoder 20 can perform a rate-distortion analysis to select a most suitable intra-prediction mode. For example, the video encoder 20 can calculate rate-distortion values for one or more intra-prediction modes, and select one of the modes having acceptable rate-distortion characteristics. The video encoder 20 can also be configured to determine an encoding context for the block. The context can include various characteristics of the block, such as, for example, a size of the block, which can be determined in terms of pixel dimensions, a prediction unit (PU) type such as, in the example of HEVC, 2Nx2N, Nx2N, 2NxN, NxN, short-distance intra-prediction (SDIP) types such as 2NxN/2, N/2x2N, 2Nx1, 1x2N, a macroblock type in the example of H.264, a coding unit (CU) depth for the block, or other size measurements for a block of video data. In some examples, the context may correspond to any or all of the intra-prediction modes for an above-neighboring block, a left-neighboring block, an above-left-neighboring block, an above-right-neighboring block, or other neighboring blocks. In some examples, the context may include both intra-prediction modes for one or more blocks and size information for the current block being encoded. In any case, the video encoder 20 can include configuration data that maps the context for the block to various encoding characteristics for the current block. For example, based on the context for the block, the configuration data may indicate one or more most likely intra-prediction modes, an intra-prediction mode index table, and a mapping table.
That is, the configuration data may include a plurality of intra-prediction mode index tables and mapping tables, in addition to an indication of one of the plurality of intra-prediction mode index tables and one of the mapping tables to use for encoding an indication of an intra-prediction mode for a current block, based on the encoding context for the current block. The configuration data can additionally provide an indication of one or more most likely modes for the current block based on the encoding context. The number of most likely intra-prediction modes used can be fixed, so that one most likely intra-prediction mode is always used, two most likely intra-prediction modes are always used, three most likely intra-prediction modes are always used, and so on; alternatively, the number of most likely intra-prediction modes may be context dependent, such that some contexts use one most likely intra-prediction mode while other contexts use two or more most likely intra-prediction modes. The mode index table can include a set of intra-prediction modes in addition to indices mapped to each of the intra-prediction modes. In some examples, the number of available intra-prediction modes may depend on the size of the block being encoded, and therefore the plurality of intra-prediction mode index tables and mapping tables may have different numbers of entries, depending, for example, on the size of the block being encoded and/or other factors. There may be a one-to-many relationship between the mapping tables and the intra-prediction mode index tables in the configuration data. That is, the same mapping table can be used to encode intra-prediction modes selected from one or more intra-prediction mode index tables. In this way, the mapping tables can be reused for multiple intra-prediction mode index tables.
Likewise, the same intra-prediction mode index tables can be reused across a variety of contexts, for example, when two or more contexts share the same set of intra-prediction modes and similar or identical expected probabilities of the intra-prediction modes being used in those contexts. In addition, in some cases, the same intra-prediction mode index table and mapping table can be used for all blocks of a particular size, and the most likely intra-prediction mode can be determined based on, for example, intra-prediction modes for neighboring blocks of a block of that particular size. In any case, according to the techniques of this description, the video encoder 20 can determine one or more most likely modes for a block, based on an encoding context for the block, in addition to an intra-prediction mode index table and a mapping table based on the encoding context for the block. After selecting the intra-prediction mode to use for encoding the block, the video encoder 20 can determine whether the selected intra-prediction mode is one of the most likely intra-prediction modes for the block. If the selected mode is one of the most likely modes, the video encoder 20 can signal the intra-prediction mode using a single-bit codeword (for example, '0' or '1') or a codeword consisting of a series of bits. In addition, the most likely intra-prediction mode may have an index value in the intra-prediction mode index table selected for the block based on the block's encoding context. In particular, the intra-prediction mode index table may include a unique index value for each intra-prediction mode in the table. Let m represent the index value for the most likely intra-prediction mode. Since the codeword for the most likely intra-prediction mode can be signaled separately, the mapping table need not include an additional codeword for the most likely intra-prediction mode.
Thus, if the set of available intra-prediction modes has K + 1 elements mapped to a range of indices from 0 to K, the mapping table can assign K codewords to indices 0 to K - 1. To determine a codeword according to this illustrative scheme, assume that the selected intra-prediction mode is not the most likely intra-prediction mode, and has a mode index value of j. Let the value n represent the modified intra-prediction mode index corresponding to j. In accordance with the preceding description, the codeword that is mapped to index n is signaled from the encoder to the decoder to indicate the selected intra-prediction mode j. If the mode index value for the selected intra-prediction mode is less than the mode index value of the most likely intra-prediction mode, then the video encoder 20 can encode the indication of the intra-prediction mode used to encode the current block using the codeword corresponding to j. In other words, if j < m, then n = j. On the other hand, if the mode index value for the selected intra-prediction mode is greater than or equal to the mode index value of the most likely intra-prediction mode, then the video encoder 20 can encode the indication of the intra-prediction mode used to encode the current block using the codeword corresponding to j - 1. In other words, if j > m, then n = j - 1. In cases where more than one most likely intra-prediction mode is determined, the video encoder 20 may signal in the encoded bitstream whether the selected mode is one of the determined most likely intra-prediction modes using a first bit (for example, '0' or '1') or a series of bits. If the selected mode is one of the determined most likely intra-prediction modes, then the video encoder 20 can signal which of the most likely intra-prediction modes is the selected mode using a second bit.
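The encoder-side rule just described, with selected mode index j and most likely mode index m (j is never equal to m, since the most likely mode is signaled separately), can be sketched as follows; the function name is an illustrative assumption.

```python
def encode_mode_index(j, m):
    """Modified intra-prediction mode index n for selected mode j,
    given most likely mode index m: n = j if j < m, else n = j - 1."""
    assert j != m, "the most likely mode is signaled by its own bit"
    return j if j < m else j - 1
```

This is the inverse of the decoder-side mapping: every non-most-likely mode index j lands on a distinct n in the range 0 to K - 1, so K codewords suffice for the K + 1 modes.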
If the selected mode is not one of the determined most likely intra-prediction modes, then the video encoder 20 can signal which of the other intra-prediction modes is the selected mode using a codeword from a mapping table. Assuming again, without loss of generality, that the first bit has a value of "0" to indicate that the selected mode is one of the most likely intra-prediction modes, and that the video encoder 20 determines two most likely intra-prediction modes, then the video encoder 20 can signal which of the two most likely intra-prediction modes is the selected mode with a value of "00" or "01", where the first 0 represents the first bit. If the selected mode is not one of the most likely intra-prediction modes, then the video encoder 20 can signal the selected mode by signaling a first bit of "1" followed by a codeword. In addition, the two most likely intra-prediction modes may have index values in the intra-prediction mode index table selected for the block based on the block's encoding context. In particular, the intra-prediction mode index table may include a unique index value for each intra-prediction mode in the table. Let m1 represent the index value for the first most likely intra-prediction mode and m2 represent the index value for the second most likely intra-prediction mode. Since the codewords for the first most likely intra-prediction mode and the second most likely mode can be signaled using a first bit and a second bit as described above, the mapping table need not include additional codewords for the first most likely intra-prediction mode and the second most likely intra-prediction mode. Thus, if the set of intra-prediction modes has K + 1 elements mapped to a range of indices from 0 to K, the mapping table can assign K - 1 codewords to indices 0 to K - 2.
To determine a codeword according to this illustrative scheme in which two most likely modes are identified, assume that the selected intra-prediction mode is not one of the most likely intra-prediction modes, and has a mode index value of j. Let the value n represent the modified intra-prediction mode index corresponding to j. In accordance with the preceding description, the codeword that is mapped to index n is signaled from the encoder to the decoder to indicate the selected intra-prediction mode j. If the mode index value for the selected intra-prediction mode is less than the mode index value of the first most likely intra-prediction mode, then the video encoder 20 can encode the indication of the intra-prediction mode used to encode the current block using the codeword corresponding to j. In other words, if j < m1, then n = j. On the other hand, if the mode index value for the selected intra-prediction mode is greater than or equal to the mode index value of the first most likely intra-prediction mode, but less than that of the second most likely intra-prediction mode, then the video encoder 20 can encode the indication of the intra-prediction mode used to encode the current block using the codeword corresponding to j - 1. In other words, if j > m1 and j < m2, then n = j - 1. Finally, if the mode index value for the selected intra-prediction mode is greater than the mode index value of the first most likely intra-prediction mode and of the second most likely intra-prediction mode, then the video encoder 20 can encode the indication of the intra-prediction mode used to encode the current block using the codeword corresponding to j - 2. In other words, if j > m1 and j > m2, then n = j - 2.
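The three-case encoder mapping for two most likely modes can be sketched directly from the conditions above; this is a minimal illustration under the text's assumptions (m1 < m2, and j equal to neither), with a hypothetical function name.

```python
def encode_mode_index_2mpm(j, m1, m2):
    """Modified index n for selected mode j, given the two most likely
    mode indices m1 < m2 (those two modes are signaled by the first
    and second bits, so j is never m1 or m2)."""
    assert m1 < m2 and j != m1 and j != m2
    if j < m1:
        return j          # n = j
    if j < m2:
        return j - 1      # m1 < j < m2  ->  n = j - 1
    return j - 2          # j > m2       ->  n = j - 2
```

For example, with m1 = 2 and m2 = 5, the remaining mode indices 0, 1, 3, 4, 6, ... map onto the contiguous modified indices 0, 1, 2, 3, 4, ..., which is what lets the mapping table carry only K - 1 codewords.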
The mapping tables for the remaining intra-prediction modes can thus be constructed more efficiently by reassigning indices, in order to compensate for the fact that the most likely modes are not included in the mapping tables, which can yield bit savings relative to a scheme in which one or more non-selected most likely modes would have assigned codewords. The video encoder 20 can, in some examples, be configured to begin the analysis for selecting an intra-prediction mode with the most likely mode, based on the context. When the most likely mode achieves suitable rate-distortion characteristics, in some examples, the video encoder 20 can select the most likely mode. In other examples, the video encoder 20 need not begin the selection process with the most likely mode. Following intra-predictive or inter-predictive encoding to produce predictive data and residual data, and following any transforms (such as the 4x4 or 8x8 integer transform used in H.264/AVC or a discrete cosine transform DCT) to produce transform coefficients, quantization of the transform coefficients can be performed. Quantization generally refers to a process in which transform coefficients are quantized to possibly reduce the amount of data used to represent the coefficients. The quantization process can reduce the bit depth associated with some or all of the coefficients. For example, an n-bit value can be rounded down to an m-bit value during quantization, where n is greater than m. Following quantization, entropy coding of the quantized data can be performed, for example, according to context-adaptive variable-length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), or another entropy coding methodology.
A processing unit configured for entropy coding, or another processing unit, can perform other processing functions, such as zero run-length coding of quantized coefficients and/or generation of syntax information such as coded block pattern (CBP) values, macroblock type, coding mode, maximum macroblock size for a coded unit (such as a frame, slice, macroblock, or sequence), and the like. The video decoder 30 can ultimately receive encoded video data, for example, from modem 28 and receiver 26. According to the techniques of this description, the video decoder 30 can receive a codeword representative of an intra-prediction mode used to encode a block of video data. The codeword can be encoded by the video encoder 20 using a CABAC process and can be decoded by the video decoder 30 using a reciprocal CABAC process. The video decoder 30 can be configured to determine a coding context for the block in a manner substantially similar to the video encoder 20. Furthermore, the video decoder 30 can include configuration data similar to that of the video encoder 20, for example, indications of a most likely mode, an intra-prediction mode index table, and a mapping table for each coding context. When one most likely intra-prediction mode is used, a single bit can be used to indicate whether the intra-prediction mode used to encode the block is the most likely mode. If the selected mode is determined not to be the most likely mode, then the video decoder 30 can determine the intra-prediction mode used to encode the video data block in a manner generally reciprocal to that of the video encoder 20. Specifically, again let n represent the modified intra-prediction mode index to which a received codeword is mapped in the mapping table, let j represent the intra-prediction mode index to be used to decode the coded block, and let m represent the mode index of the most likely mode.
If the modified intra-prediction mode index n is less than the mode index of the most likely mode m, then the video decoder 30 can decode the encoded block using the intra-prediction mode having index n. That is, if n < m, then j = n. On the other hand, if the modified intra-prediction mode index n is greater than or equal to the mode index of the most likely mode m, then the video decoder 30 can decode the encoded block using the intra-prediction mode having index n + 1. In other words, if n ≥ m, then j = n + 1. When two or more most likely intra-prediction modes are used, if the codeword comprises a first bit indicating that the selected mode is one of the two most likely intra-prediction modes, then the video decoder 30 can determine the intra-prediction mode used to encode the coded block based on additional bits identifying which of the two most likely intra-prediction modes corresponds to the selected mode. If the first bit indicates that the selected mode is not one of the two most likely intra-prediction modes, then the video decoder 30 can determine the intra-prediction mode used to encode the video data block in a manner generally reciprocal to that of the video encoder 20. Specifically, again let n represent the modified intra-prediction mode index to which a received codeword is mapped in the mapping table, let j represent the intra-prediction mode index to be used to decode the coded block, let m1 represent the mode index of the first most likely mode, and let m2 represent the mode index of the second most likely mode. If the modified intra-prediction mode index n is less than the mode index of the first most likely mode m1, then the video decoder 30 can decode the encoded block using the intra-prediction mode having index n. That is, if n < m1, then j = n.
Otherwise, if the modified intra-prediction mode index plus one (n + 1) is less than the mode index of the second most likely mode m2, then the video decoder 30 can decode the encoded block using the intra-prediction mode having index n + 1. In other words, if n + 1 < m2, then j = n + 1. Otherwise, the video decoder 30 can decode the encoded block using the intra-prediction mode having index n + 2. In other words, if n + 1 ≥ m2, then j = n + 2. For two most likely modes, the mapping of mode indices to modified intra-prediction mode indices, as performed by the video encoder 20, can thus be represented by the following pseudocode:

    if (j > m2) n = j - 2
    else if (j > m1) n = j - 1
    else n = j

For N most likely modes, where m1 represents the mode index of the first most likely mode and mN represents the mode index of the Nth most likely mode, the mapping of mode indices to modified intra-prediction mode indices, as performed by the video encoder 20, can thus be represented by the following pseudocode:

    if (j > mN) n = j - N
    else if (j > mN-1) n = j - N + 1
    ...
    else if (j > m2) n = j - 2
    else if (j > m1) n = j - 1
    else n = j

For two most likely modes, the mapping of a modified intra-prediction mode index to a mode index, as performed by the video decoder 30, can thus be represented by the following pseudocode:

    if (n < m1) j = n
    else if (n + 1 < m2) j = n + 1
    else j = n + 2

For N most likely modes, the mapping of a modified intra-prediction mode index to a mode index, as performed by the video decoder 30, can thus be represented by the following pseudocode:

    if (n < m1) j = n
    else if (n + 1 < m2) j = n + 1
    else if (n + 2 < m3) j = n + 2
    else if (n + 3 < m4) j = n + 3
    ...
    else if (n + (N - 1) < mN) j = n + (N - 1)
    else j = n + N

According to the techniques of this description, modes can also be mapped directly to codeword indices that indicate the corresponding codewords.
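The pseudocode above generalizes naturally to any number of most likely modes; a runnable sketch (helper names are illustrative, not from the description) with a round-trip check might look like:

```python
def to_modified_index(j, mpms):
    # Encoder side: subtract one for each most likely mode whose index is
    # below the selected mode index j (j is never itself one of mpms).
    return j - sum(1 for m in mpms if j > m)

def from_modified_index(n, mpms):
    # Decoder side: walk the sorted most likely modes, re-inserting the
    # gaps the encoder removed.
    j = n
    for m in sorted(mpms):
        if j >= m:
            j += 1
    return j

# Round trip over all non-most-likely modes, with three most likely modes.
mpms = [2, 5, 7]
for j in range(10):
    if j not in mpms:
        assert from_modified_index(to_modified_index(j, mpms), mpms) == j
print("round trip ok")
```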
Similar to the modified intra-prediction mode indices described above, instead of sending the codeword with an index corresponding to the mode index, a bit saving can be achieved by sending a codeword with a modified codeword index, where the modification is a result of using codeword indices originally associated with the most likely modes to indicate modes that are not the most likely modes. Since the most likely modes are signaled using an initial bit or series of bits as described above, the most likely modes can be excluded from consideration when signaling a codeword index for a mode that is not one of the most likely modes. As a result, a codeword index that is originally mapped to one of the most likely modes can be used to indicate a mode that is not one of the most likely modes. Regardless of whether the codeword is mapped to a modified intra-prediction mode index or a modified codeword index, the codeword can be coded using a CABAC process. Assuming, for example, that two most likely intra-prediction modes are used, as with the modified intra-prediction mode indices above, if the set of available intra-prediction modes has K + 1 elements mapped to a range of codeword indices ranging from 0 to K, the modified codeword index table can assign K - 1 codewords to the codeword indices from 0 to K - 2. Let C represent a codeword index and Cmod represent a modified codeword index. Additionally, let Cm1 represent the lowest codeword index corresponding to a most likely mode, Cm2 represent the second lowest codeword index corresponding to a most likely mode, and so on. As will be explained in more detail below, the mapping of modes to codeword indices can be dynamic. Thus, a first most likely mode with a lower mode index may not also have a lower codeword index. Accordingly, Cm1 may not necessarily correspond to the first most likely mode, Cm2 may not correspond to the second most likely mode, and so on.
For N most likely modes, the mapping of codeword indices to modified codeword indices, as performed by the video encoder 20, can thus be represented by the following pseudocode:

    if (C > CmN) Cmod = C - N
    else if (C > CmN-1) Cmod = C - N + 1
    ...
    else if (C > Cm2) Cmod = C - 2
    else if (C > Cm1) Cmod = C - 1
    else Cmod = C

For N most likely modes, the mapping of modified codeword indices to codeword indices, as performed by the video decoder 30, can likewise be represented by the following pseudocode:

    if (Cmod < Cm1) C = Cmod
    else if (Cmod + 1 < Cm2) C = Cmod + 1
    else if (Cmod + 2 < Cm3) C = Cmod + 2
    else if (Cmod + 3 < Cm4) C = Cmod + 3
    ...
    else if (Cmod + (N - 1) < CmN) C = Cmod + (N - 1)
    else C = Cmod + N

The video encoder 20 and the video decoder 30 can each be implemented as any of a variety of suitable encoder or decoder circuitry, as applicable, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic circuitry, software, hardware, firmware, or any combinations thereof. Each of the video encoder 20 and the video decoder 30 can be included in one or more encoders or decoders, either of which can be integrated as part of a combined video encoder/decoder (CODEC). An apparatus including the video encoder 20 and/or the video decoder 30 may comprise an integrated circuit, a microprocessor, and/or a wireless communication device, such as a cellular telephone. Figure 2 is a block diagram illustrating an example of the video encoder 20 that can implement techniques for encoding information indicative of an intra-coding mode. The video encoder 20 can perform intra- and inter-coding of blocks within video frames, including macroblocks, or partitions or sub-partitions of macroblocks. Intra-coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame.
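The same pattern applies to the codeword-index variant; a sketch under the same assumptions (Cm1 < Cm2 < ... are the most likely modes' codeword indices; function names are illustrative):

```python
def to_modified_cw_index(c, mpm_cw):
    # Encoder side: c is the selected mode's codeword index, which is
    # never one of the most likely modes' codeword indices in mpm_cw.
    return c - sum(1 for cm in mpm_cw if c > cm)

def from_modified_cw_index(cmod, mpm_cw):
    # Decoder side: reverse the mapping by re-inserting the gaps.
    c = cmod
    for cm in sorted(mpm_cw):
        if c >= cm:
            c += 1
    return c

# With Cm1 = 2 and Cm2 = 5 (the values used in the Table 5 example later),
# codeword index 8 maps to modified codeword index 6, and back.
print(to_modified_cw_index(8, [2, 5]))    # 6
print(from_modified_cw_index(6, [2, 5]))  # 8
```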
Inter-coding relies on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames of a video sequence. Intra-mode (I mode) can refer to any of several spatial-based compression modes, and inter-modes such as unidirectional prediction (P mode) or bidirectional prediction (B mode) can refer to any of several temporal-based compression modes. Although components for inter-mode encoding are depicted in figure 2, it should be understood that the video encoder 20 can additionally include components for intra-prediction mode encoding. However, such components are not illustrated for the sake of brevity and clarity. As illustrated in figure 2, the video encoder 20 receives a current video block within a video frame to be encoded. In the example of figure 2, the video encoder 20 includes the motion compensation unit 44, the motion estimation unit 42, the memory 64, the adder 49, the transform module 52, the quantization unit 54, and the entropy coding unit 56. For video block reconstruction, the video encoder 20 also includes the inverse quantization unit 58, the inverse transform module 60, and the adder 62. A deblocking filter (not shown in figure 2) can also be included to filter block boundaries to remove blockiness artifacts from the reconstructed video. If desired, the deblocking filter would typically filter the output of the adder 62. During the encoding process, the video encoder 20 receives a video frame or slice to be encoded. The frame or slice can be divided into multiple video blocks. The motion estimation unit 42 and the motion compensation unit 44 perform inter-predictive coding of the received video block with respect to one or more blocks in one or more reference frames to provide temporal compression.
The intra-prediction module 46 can perform intra-predictive coding of the received video block with respect to one or more neighboring blocks in the same frame or slice as the block to be encoded to provide spatial compression. The mode selection unit 40 can select one of the coding modes, intra or inter, for example, based on error results and based on a frame or slice type for the frame or slice including the current block being encoded, and provides the resulting intra- or inter-coded block to the adder 49 for generation of residual block data and to the adder 62 for reconstruction of the coded block for use in a reference frame or reference slice. In general, intra-prediction involves predicting a current block with respect to neighboring, previously coded blocks, while inter-prediction involves motion estimation and motion compensation to temporally predict the current block. The motion estimation unit 42 and the motion compensation unit 44 represent the inter-prediction elements of the video encoder 20. The motion estimation unit 42 and the motion compensation unit 44 can be highly integrated, but are illustrated separately for conceptual purposes. Motion estimation is the process of generating motion vectors, which estimate motion for video blocks. A motion vector, for example, can indicate the displacement of a predictive block within a predictive reference frame (or other coded unit) with respect to the current block being coded within the current frame (or another coded unit). A predictive block is a block that is found to closely match the block to be coded, in terms of pixel difference, which can be determined by the sum of absolute differences (SAD), sum of squared differences (SSD), or other difference metrics. A motion vector can also indicate the displacement of a partition of a macroblock. Motion compensation can involve fetching or generating the predictive block based on the motion vector determined by motion estimation.
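As a small illustration of the block-matching metrics mentioned above (SAD and SSD), using toy 2x2 pixel blocks rather than actual macroblock data:

```python
def sad(block_a, block_b):
    # Sum of absolute differences between two equal-sized pixel blocks.
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def ssd(block_a, block_b):
    # Sum of squared differences between two equal-sized pixel blocks.
    return sum((a - b) ** 2 for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

cur = [[10, 12], [14, 16]]   # current block (hypothetical pixel values)
ref = [[11, 12], [13, 18]]   # candidate predictive block
print(sad(cur, ref))  # 1 + 0 + 1 + 2 = 4
print(ssd(cur, ref))  # 1 + 0 + 1 + 4 = 6
```

Motion estimation would evaluate such a metric over many candidate displacements and keep the one with the lowest difference.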
Again, the motion estimation unit 42 and the motion compensation unit 44 may be functionally integrated, in some examples. The motion estimation unit 42 calculates a motion vector for the video block of an inter-coded frame by comparing the video block with video blocks of a reference frame in the reference frame store 64. The motion compensation unit 44 can also interpolate sub-integer pixels of the reference frame, for example, an I-frame or a P-frame. The ITU H.264 standard, as an example, describes two lists: list 0, which includes reference frames having a display order earlier than the current frame being encoded, and list 1, which includes reference frames having a display order later than the current frame being encoded. Therefore, the data stored in the reference frame store 64 can be organized according to these lists. The motion estimation unit 42 compares blocks of one or more reference frames from the reference frame store 64 with a block to be encoded of a current frame, for example, a P-frame or a B-frame. When the reference frames in the reference frame store 64 include values for sub-integer pixels, a motion vector calculated by the motion estimation unit 42 may refer to a sub-integer pixel location of a reference frame. The motion estimation unit 42 and/or the motion compensation unit 44 can also be configured to calculate values for sub-integer pixel positions of the reference frames stored in the reference frame store 64 if no values for sub-integer pixel positions are stored in the reference frame store 64. The motion estimation unit 42 sends the calculated motion vector to the entropy coding unit 56 and the motion compensation unit 44. The reference frame block identified by a motion vector can be referred to as a predictive block. The motion compensation unit 44 can calculate prediction data based on the predictive block.
The intra-prediction module 46 can intra-predict a current block, as an alternative to the inter-prediction performed by the motion estimation unit 42 and the motion compensation unit 44, as described above. In particular, the intra-prediction module 46 can determine an intra-prediction mode to be used to encode a current block. In some examples, the intra-prediction module 46 can encode a current block using various intra-prediction modes, for example, during separate encoding passes, and the intra-prediction module 46 (or the mode selection unit 40, in some examples) can select an appropriate intra-prediction mode to use from the tested modes. For example, the intra-prediction module 46 can calculate rate-distortion values using a rate-distortion analysis for the various tested intra-prediction modes, and select the intra-prediction mode having the best rate-distortion characteristics among the tested modes. Rate-distortion analysis generally determines an amount of distortion (or error) between an encoded block and an original, unencoded block that was encoded to produce the encoded block, as well as a bit rate (that is, a number of bits) used to produce the encoded block. The intra-prediction module 46 can calculate ratios from the distortions and rates for the various encoded blocks to determine which intra-prediction mode exhibits the best rate-distortion value for the block. In any case, after selecting an intra-prediction mode for a block, the intra-prediction module 46 can provide information indicative of the intra-prediction mode selected for the block to the entropy coding unit 56. The entropy coding unit 56 can encode the information indicating the selected intra-prediction mode using CABAC according to the techniques of this description.
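As an illustration of the rate-distortion selection described above, the common Lagrangian formulation cost = distortion + λ·rate can be sketched as follows; the distortion and rate values and the λ are hypothetical, and this particular form is a conventional choice rather than a quote from this description:

```python
def best_mode(candidates, lam):
    # candidates: iterable of (mode_name, distortion, rate_in_bits).
    # Select the mode minimizing the Lagrangian cost D + lambda * R.
    return min(candidates, key=lambda c: c[1] + lam * c[2])[0]

modes = [("DC", 120.0, 10), ("vertical", 90.0, 14), ("horizontal", 100.0, 13)]
print(best_mode(modes, 5.0))  # vertical: cost 160 vs 170 (DC) and 165
```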
As shown in figure 2, the video encoder 20 may include configuration data 66, which may include a plurality of intra-prediction mode index tables and a plurality of modified intra-prediction mode index tables (also referred to as codeword mapping tables), definitions of the coding contexts for various blocks, and indications of a most likely intra-prediction mode, an intra-prediction mode index table, and a modified intra-prediction mode index table to use for each of the contexts. Table 1 below represents an example of intra-prediction mode indices, corresponding intra-prediction modes, and an indication of which mode is the most likely mode for a particular context. Table 1 also illustrates the modified intra-prediction mode indices that map to each mode index, in this particular example. Table 2 provides an illustrative mapping table that maps codewords to the modified intra-prediction mode indices that generally correspond to the mode indices of Table 1. As discussed below, more than one most likely mode can be used, but the examples of Tables 1 and 2 assume only one most likely mode is used. Because only one most likely mode is used, Table 2 includes one fewer entry than Table 1. Since the most likely mode is signaled separately from the remaining modes, its mode index does not have a corresponding modified intra-prediction mode index. Similarly, Table 2 need not include a codeword for the most likely mode.

Table 1

    Mode Index | Intra-prediction Mode | Most Likely | Modified Intra-prediction Mode Index
    0          | DC                    | NO          | 0
    1          | Vertical              | NO          | 1
    2          | Horizontal            | NO          | 2
    3          | Diagonal down/right   | NO          | 3
    4          | Diagonal down/left    | NO          | 4
    5          | Vertical-right        | YES         | X
    6          | Vertical-left         | NO          | 5
    7          | Horizontal-up         | NO          | 6
    8          | Horizontal-down       | NO          | 7

Table 2

    Modified Intra-prediction Mode Index | Codeword
    0                                    | 000
    1                                    | 001
    2                                    | 010
    3                                    | 011
    4                                    | 100
    5                                    | 101
    6                                    | 110
    7                                    | 111

For purposes of example, let m represent the mode index of the most likely mode in Table 1, let j represent the mode index of the selected intra-prediction mode, and let n represent the modified intra-prediction mode index corresponding to j.
If the selected mode is the most likely mode, then a first bit (for example, "0") is used to represent the mode, in this example, and the mode is determined to be the most likely mode, as indicated by Table 1 (vertical-right, in this example). If a first bit other than 0 (that is, a "1") is sent, then the mode is not the most likely mode. Let n correspond to the modified intra-prediction mode index indicated by the codeword that is sent to represent the mode. The codeword that is mapped to index n is signaled from the encoder to the decoder to indicate the selected intra-prediction mode j. If the mode index value for the selected intra-prediction mode is less than the mode index value of the most likely intra-prediction mode, then the video encoder 20 can encode the indication of the intra-prediction mode used to encode the current block using the codeword corresponding to j. In other words, if j < m, then n = j. On the other hand, if the mode index value for the selected intra-prediction mode is greater than or equal to the mode index value of the most likely intra-prediction mode, then the video encoder 20 can encode the indication of the intra-prediction mode used to encode the current block using the codeword corresponding to j - 1. In other words, if j ≥ m, then n = j - 1. A decoder, such as the video decoder 30, will generally perform the opposite mapping from the encoder 20. Thus, the decoder 30 can determine that if n < m, then the mode index is equal to n. On the other hand, if n ≥ m, then the mode index is equal to n + 1. In other words, if the modified intra-prediction mode index (for example, the modified intra-prediction mode index of Table 2 corresponding to the codeword that is sent) is greater than or equal to the index of the most likely mode (from Table 1, in this example), then the intra-prediction mode is actually indicated by n + 1, instead of n.
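The one-most-likely-mode signaling just described, combined with the Table 1 and Table 2 assignments, can be sketched as follows (the "1" escape bit and the dictionary encoding of Table 2 are illustrative conventions, not mandated by the description):

```python
MPM = 5  # vertical-right, the most likely mode in Table 1
TABLE_2 = {i: format(i, "03b") for i in range(8)}  # modified index -> codeword

def signal_mode(j):
    if j == MPM:
        return "0"                       # single bit: most likely mode
    n = j - 1 if j > MPM else j          # modified intra-prediction mode index
    return "1" + TABLE_2[n]              # escape bit + Table 2 codeword

print(signal_mode(8))  # "1111": horizontal-down, n = 7
print(signal_mode(0))  # "1000": DC, n = 0
```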
Thus, when the mode index for the intra-prediction mode used to encode the current block (for example, n + 1) is greater than the index of the most likely coding mode (m), the codeword used to represent the selected intra-prediction mode corresponds to a modified intra-prediction mode index (n) that is one less than the mode index (n + 1) of the intra-prediction mode used to encode the current block. As an example with respect to the examples of Tables 1 and 2, suppose that for a current block, which has a context indicating a most likely mode of vertical-right, the selected mode is horizontal-down. The index m for the most likely mode is 5, in this example, while the mode index j for the selected mode (according to Table 1) is 8. In this example, since the mode index for the selected mode is greater than the mode index for the most likely mode, n = j - 1, where n is the modified intra-prediction mode index and is equal to 7. Thus, per Table 2, the video encoder 20 would use the codeword 111 to represent the selected mode, in this example. The codeword 111 would follow an initial bit indicating that the selected mode is not the most likely mode. Accordingly, the video decoder 30 (figures 1 and 4) would receive the initial bit and the codeword 111 and determine that the value of n is 7. Since 7 is greater than 5 (that is, n ≥ m) in this example, the video decoder 30 would retrieve the mode of Table 1 having the mode index n + 1, which is 8, corresponding to horizontal-down, in this example. As another example, again with respect to the examples of Tables 1 and 2, assume that for the current block, the selected mode is DC. Again, the index m for the most likely mode is 5 in this example, while the mode index j for the selected mode (according to Table 1) is 0. In this example, since the mode index for the selected mode is less than the mode index for the most likely mode, the mode index is equal to n, where n is the modified intra-prediction mode index.
Thus, per Table 2, the video encoder 20 would use the codeword 000 to represent the selected mode, in this example. The codeword 000 would follow an initial bit indicating that the selected mode is not the most likely mode. Accordingly, the video decoder 30 (figures 1 and 4) would receive the initial bit and the codeword 000 and determine that the value of n is 0. Since 0 is less than 5 (that is, n < m) in this example, the video decoder 30 would retrieve the mode of Table 1 having the mode index n, which is 0, corresponding to DC, in this example. Table 3 below shows an example of intra-prediction mode indices, corresponding intra-prediction modes, and an indication of which modes are the most likely modes for a particular context. Table 3 also illustrates the modified intra-prediction mode indices that map to each mode index, in this particular example. Table 4 provides an illustrative mapping table that maps codewords to the modified intra-prediction mode indices that generally correspond to the mode indices of Table 3. As discussed above, more than two most likely modes may also be used, but the examples of Tables 3 and 4 assume only two most likely modes are used. Since two most likely modes are used, Table 4 contains two fewer entries than Table 3.

Table 3

    Mode Index | Intra-prediction Mode | Most Likely | Modified Intra-prediction Mode Index
    0          | DC                    | NO          | 0
    1          | Vertical              | NO          | 1
    2          | Horizontal            | NO          | 2
    3          | Diagonal down/right   | NO          | 3
    4          | Diagonal down/left    | YES         | X
    5          | Vertical-right        | NO          | 4
    6          | Vertical-left         | YES         | X
    7          | Horizontal-up         | NO          | 5
    8          | Horizontal-down       | NO          | 6

Table 4

    Modified Intra-prediction Mode Index | Codeword
    0                                    | 000
    1                                    | 001
    2                                    | 010
    3                                    | 011
    4                                    | 100
    5                                    | 101
    6                                    | 110

In particular, let m1 represent the mode index of the first most likely mode in Table 3, and let m2 represent the mode index of the second most likely mode. If the selected mode is one of the most likely modes, then a first bit (for example, "0") is used to signal that the mode is one of the two most likely modes.
If the mode is one of the two most likely modes, then a second bit is used to signal which of the two most likely modes corresponds to the selected mode. In this way, the two most likely modes can be signaled with the initial bit sequences "00" and "01", respectively. If a first bit other than "0" (that is, a "1") is sent, then the selected mode is not one of the two most likely modes. Let n correspond to the modified intra-prediction mode index indicated by the codeword that is sent to represent the mode. The video encoder 20 can determine the mode index of the selected mode (j) and map the mode index to a modified mode index (n). If j > m2, then n = j - 2. Otherwise, if j > m1, then n = j - 1. Otherwise, n = j. The video decoder 30 receives the modified intra-prediction mode index (n) and can first compare n with m1. If n < m1, then the mode index (j) is equal to n. If n is not less than m1, then n + 1 can be compared with m2. If n + 1 < m2, then the mode index is equal to n + 1. Otherwise, the mode index is equal to n + 2. As an example with respect to the examples of Tables 3 and 4, assume that for a current block, which has a context indicating the most likely modes of vertical-left and diagonal down/left, the selected mode is horizontal-down. The m1 and m2 indices for the most likely modes are 4 and 6 in this example, while the mode index j for the selected mode (according to Table 3) is 8. In this example, since the mode index for the selected mode is greater than the mode indices for both of the most likely modes, the mode index j is equal to n + 2, where n is equal to the modified intra-prediction mode index of Table 4. Thus, if the mode index j is equal to 8, then n = 6. Thus, the video encoder 20 would use the codeword 110 to represent the selected mode, in this example. Accordingly, the video decoder 30 (figures 1 and 4) would receive the codeword 110 and determine that the value of n is 6.
Since 6 is not less than 4 (that is, n ≥ m1) and 6 plus 1 is not less than 6 (that is, n + 1 ≥ m2) in this example, the video decoder 30 would retrieve the mode of Table 3 having the mode index j equal to n + 2, which is 8, corresponding to horizontal-down, in this example. As another example, again with respect to the examples of Tables 3 and 4, assume that for the current block, the selected mode is DC. Again, the m1 and m2 indices for the most likely modes are 4 and 6, in this example, while the mode index j for the selected mode (according to Table 3) is 0. In this example, since the mode index for the selected mode is less than the mode indices for both of the most likely modes (that is, j < m1), the mode index j is equal to n, where n is equal to the modified intra-prediction mode index of Table 4. Thus, n is equal to 0. Thus, based on Table 4, the video encoder 20 would use the codeword 000 to represent the selected mode, in this example. The codeword 000 would follow an initial bit or series of bits indicating that the selected mode is not one of the most likely modes. Accordingly, the video decoder 30 (figures 1 and 4) would receive the initial bit or series of bits and the codeword 000 and determine that the value of n is 0. Since 0 is less than 4 and 6 (that is, n < m1) in this example, the video decoder 30 would retrieve the mode of Table 3 having the mode index j equal to n, which is 0, corresponding to DC, in this example. As another example with respect to the examples of Tables 3 and 4, assume that for a current block the selected mode is vertical-right. The m1 and m2 indices for the most likely modes are 4 and 6, in this example, while the mode index j for the selected mode (according to Table 3) is 5.
In this example, since the mode index for the selected mode is greater than or equal to the mode index of the first most likely mode, but less than the mode index of the second most likely mode, the mode index j is equal to n + 1, where n is equal to the modified intra-prediction mode index of Table 4. Thus, if the mode index j is 5, then n = 4. Thus, per Table 4, the video encoder 20 would use the codeword 100 to represent the selected mode, in this example. The codeword 100 would follow an initial bit or series of bits indicating that the selected mode is not one of the most likely modes. Accordingly, the video decoder 30 (figures 1 and 4) would receive the initial bit or series of bits and the codeword 100 and determine that the value of n is 4. Since 4 is greater than or equal to 4, but 4 plus 1 is less than 6 (that is, n ≥ m1, but n + 1 < m2) in this example, the video decoder 30 would retrieve the mode of Table 3 having the mode index equal to n + 1, which is 5, corresponding to vertical-right, in this example. It should be understood that Tables 1, 2, 3 and 4 are merely examples of tables of most likely modes, indices for the modes, and codewords designated for the various indices. In other examples, other modes can be determined to be most likely, for example, based on the coding context for a given block. For example, the most likely mode can be determined based on the encoding modes used to encode neighboring blocks on the left and on top. Configuration data 66 may include a plurality of different tables associated with the different coding modes identified as the most likely mode, generally similar to the examples of Tables 1 and 3. Likewise, configuration data 66 may include a plurality of codeword mapping tables, such as Tables 2 and 4, which map indices to codewords. In general, Tables 1 and 3 can be referred to as mode index tables, while Tables 2 and 4 can be referred to as modified intra-prediction mode index mapping tables, or simply mapping tables.
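The three examples worked through for Tables 3 and 4 can be checked with a short sketch (same illustrative escape-bit convention as before; names are not from the description):

```python
M1, M2 = 4, 6  # the two most likely modes in Table 3
TABLE_4 = {i: format(i, "03b") for i in range(7)}  # modified index -> codeword

def signal_mode_2mpm(j):
    if j == M1:
        return "00"                      # first most likely mode
    if j == M2:
        return "01"                      # second most likely mode
    n = j - 2 if j > M2 else (j - 1 if j > M1 else j)
    return "1" + TABLE_4[n]              # escape bit + Table 4 codeword

print(signal_mode_2mpm(8))  # "1110": horizontal-down, n = 6
print(signal_mode_2mpm(0))  # "1000": DC, n = 0
print(signal_mode_2mpm(5))  # "1100": vertical-right, n = 4
```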
As noted above, Tables 1 and 2 are merely one example of a mode index table and a mapping table. In some examples, configuration data 66 may include data for a plurality of mode index tables and a plurality of mapping tables. In some examples, multiple coding contexts may correspond to a common mode index table. Likewise, multiple mode index tables can be mapped to a common mapping table. According to the techniques of this description, modes can also be mapped to codeword indices instead of modified intra-prediction mode indices. The codeword indices can then be mapped to modified codeword indices, which are used to look up the codewords. Table 5 below represents an example of intra-prediction mode indices and modes that are mapped to codeword indices. Table 5 also illustrates an indication of which modes are the most likely modes for a particular context, and illustrates the modified codeword indices corresponding to the codeword indices for this particular example.

Table 5

    Mode Index | Intra-prediction Mode | Most Likely | Codeword Index | Modified Codeword Index
    0          | DC                    | NO          | 3              | 2
    1          | Vertical              | NO          | 0              | 0
    2          | Horizontal            | NO          | 4              | 3
    3          | Diagonal down/right   | YES         | 5              | X
    4          | Diagonal down/left    | NO          | 6              | 4
    5          | Vertical-right        | NO          | 1              | 1
    6          | Vertical-left         | YES         | 2              | X
    7          | Horizontal-up         | NO          | 7              | 5
    8          | Horizontal-down       | NO          | 8              | 6

Table 6

    Modified Codeword Index | Codeword
    0                       | 000
    1                       | 001
    2                       | 010
    3                       | 011
    4                       | 100
    5                       | 101
    6                       | 110

For purposes of example, let Cm1 represent the codeword index of one most likely mode and Cm2 represent the codeword index of the other most likely mode, where Cm1 has a lower codeword index value than Cm2. As discussed above, Cm1 and Cm2 are determined on the basis of codeword index values as opposed to mode index values. Thus, Cm1 may not necessarily correspond to the first most likely mode, and Cm2 may not necessarily correspond to the second most likely mode.
In the example of Table 5, for example, mode 3 (diagonal down/right) is the first most likely mode, since it has the lowest mode index of the most likely modes, and mode 6 (vertical-left) is the second most likely mode. The second most likely mode, however, has a lower codeword index than the first most likely mode. Thus, in the example of Table 5, Cm1 corresponds to the codeword index of the second most likely mode, and Cm2 corresponds to the codeword index of the first most likely mode. In the following description, Cm1 is assumed to be less than Cm2. If the selected mode is one of the most likely modes, then a first bit (for example, "0") is used to signal that the mode is one of the two most likely modes. If the mode is one of the two most likely modes, then a second bit is used to signal which of the two most likely modes corresponds to the selected mode. In this way, the two most likely modes can be signaled with the initial bit strings "00" and "01", respectively. If a first bit other than "0" (that is, "1") is sent, then the selected mode is not one of the two most likely modes, and the selected mode is sent with a codeword corresponding to a codeword index. Rather than sending a codeword that corresponds directly to the codeword index of the selected mode, however, a bit savings can be achieved by video encoder 20 by sending a codeword corresponding to a modified codeword index. Video decoder 30 can receive the codeword corresponding to the modified codeword index and then determine the codeword index corresponding to the selected intra-prediction mode. Video encoder 20 can determine the codeword index of the selected mode (C) and map the codeword index to the modified codeword index (Cmod). If C >= Cm2, then Cmod = C - 2. Otherwise, if C > Cm1, then Cmod = C - 1. Otherwise, Cmod = C. Video decoder 30 receives the modified codeword index (Cmod) and can first compare it with Cm1.
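The three-way comparison just described for the encoder can be sketched directly; the function name below is invented, but the body follows the stated rule, assuming cm1 < cm2 are the codeword indexes of the two most likely modes.

```python
def modified_codeword_index(c, cm1, cm2):
    """Map codeword index c of the selected (non-most-likely) mode to the
    modified codeword index Cmod, where cm1 < cm2 are the codeword
    indexes of the two most likely modes."""
    if c >= cm2:
        return c - 2  # two most likely codeword indexes lie below c
    if c > cm1:
        return c - 1  # one most likely codeword index lies below c
    return c

# With Cm1 = 2 and Cm2 = 5 as in Table 5:
print(modified_codeword_index(8, 2, 5))  # 6
print(modified_codeword_index(4, 2, 5))  # 3
print(modified_codeword_index(0, 2, 5))  # 0
```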
If Cmod < Cm1, then the codeword index (C) is equal to Cmod. If Cmod is not less than Cm1, then Cmod + 1 can be compared with Cm2. If Cmod + 1 < Cm2, then the codeword index is equal to Cmod + 1. Otherwise, the codeword index is equal to Cmod + 2. As an example with respect to the examples of Tables 5 and 6, assume that for a current block, which has a context indicating the most likely modes of vertical-left (mode index 6) and diagonal down/right (mode index 3), the selected mode is horizontal-down (mode index 8). The codeword indexes Cm1 and Cm2 for the most likely modes are 2 and 5, in this example, while the mode index for the selected mode (according to Table 5) is 8. According to Table 5, the mode indexes 3, 6 and 8 map to codeword indexes 5, 2 and 8, respectively. In this example, since the codeword index for the selected mode (that is, codeword index 8) is greater than the codeword indexes for both of the most likely modes (that is, codeword indexes 5 and 2), the codeword index is equal to Cmod + 2, where Cmod is the modified codeword index corresponding to a codeword in Table 6. Thus, if the codeword index of the selected mode is equal to 8, then Cmod = 6. Thus, video encoder 20 can use the codeword 110 to represent the selected mode, in this example. Accordingly, video decoder 30 (figures 1 and 4) will receive the codeword 110 and will determine that the value of Cmod is 6. Since 6 is not less than 2 (that is, Cmod >= Cm1) and 6 plus 1 is not less than 5 (that is, Cmod + 1 >= Cm2) in this example, video decoder 30 will retrieve the mode from Table 5 having the codeword index Cmod + 2, which is 8, corresponding to horizontal-down, in this example. As another example, again with respect to the examples of Tables 5 and 6, assume that for the current block, the selected mode is vertical (mode index 1 and codeword index 0). Again, the Cm1 and Cm2 indexes for the most likely modes are 2 and 5.
In this example, since the codeword index for the selected mode is less than the codeword indexes for both of the most likely modes (that is, C < Cm1), the modified codeword index Cmod is equal to the codeword index. Thus, video encoder 20 would use the codeword 000 to represent the selected mode, in this example. Codeword 000 follows an initial bit or series of bits indicating that the selected mode is not a most likely mode. Accordingly, video decoder 30 (figures 1 and 4) would receive the initial bit or series of bits and the codeword 000 and determine that the value of Cmod is 0. Since 0 is less than 2 (that is, Cmod < Cm1) in this example, video decoder 30 would retrieve the mode from Table 5 having the codeword index C equal to Cmod, which is equal to 0, corresponding to vertical, in this example. As another example with respect to the examples of Tables 5 and 6, assume that for a current block the selected mode is horizontal (mode index 2 and codeword index 4). The Cm1 and Cm2 indexes for the most likely modes are 2 and 5 in this example. In this example, since the codeword index for the selected mode is greater than or equal to Cm1, but less than Cm2, the codeword index is equal to Cmod + 1, where Cmod is the modified codeword index. Thus, if the codeword index is 4, then Cmod = 3. Thus, video encoder 20 would use the codeword 011 to represent the selected mode, in this example. Codeword 011 follows an initial bit or series of bits indicating that the selected mode is not a most likely mode. Accordingly, video decoder 30 (figures 1 and 4) would receive the initial bit or series of bits and the codeword 011 and determine that the value of Cmod is 3. Since 3 is not less than 2, but 3 plus 1 is less than 5 (that is, Cmod >= Cm1, but Cmod + 1 < Cm2) in this example, video decoder 30 would retrieve the mode from Table 5 having the codeword index equal to Cmod + 1, which is equal to 4, corresponding to the horizontal mode.
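The decoder-side comparisons walked through in the three examples above can be sketched in one routine. This is an illustrative reconstruction (the function name is invented), using Cm1 = 2 and Cm2 = 5 as in Table 5.

```python
def codeword_index_from_modified(cmod, cm1, cm2):
    """Recover the codeword index C of the selected mode from the
    modified codeword index Cmod, given cm1 < cm2, the codeword
    indexes of the two most likely modes."""
    if cmod < cm1:
        return cmod
    if cmod + 1 < cm2:
        return cmod + 1
    return cmod + 2

# The three worked examples, with Cm1 = 2 and Cm2 = 5:
print(codeword_index_from_modified(6, 2, 5))  # 8 -> horizontal-down
print(codeword_index_from_modified(0, 2, 5))  # 0 -> vertical
print(codeword_index_from_modified(3, 2, 5))  # 4 -> horizontal
```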
It should be understood that Tables 1 through 6 are merely examples of tables of most likely modes, indexes for the modes, codeword indexes, and codewords assigned to the various indexes. In other examples, other intra-prediction modes can be determined as the most likely mode, for example, based on the coding context for a given block. For example, the most likely mode can be determined based on the intra-prediction modes used to encode the neighboring blocks to the left and on top. Configuration data 66 may include a plurality of different tables associated with different intra-prediction modes identified as being the most likely mode, as well as with different numbers of identified most likely modes, generally similar to the examples of Tables 1, 3 and 5. Likewise, configuration data 66 can include a plurality of codeword mapping tables, such as Tables 2, 4 and 6, that map the remaining intra-prediction mode indexes to codewords. As described above, the most likely modes are signaled using an initial bit or a series of bits. Such an initial bit or series of bits may also depend on context. For example, a different series of bits can be used to signal the most likely modes depending on which intra-prediction modes are identified as the most likely modes, as well as on how many most likely modes are identified. The most likely modes and coding tables for any given case can also be defined based on other types of contexts, instead of or in addition to the prediction modes of neighboring blocks used in this case. The examples of Tables 1, 2, 3, 4 and 5 are provided with respect to the nine H.264 intra-prediction modes. However, it should be understood that the techniques of this description can be applied to other coding standards and techniques, such as High Efficiency Video Coding (HEVC). In some examples, such as in HEVC, the number of available intra-prediction modes may depend on the size of the block (for example, a "coding unit" or "CU" in HEVC) being encoded.
For each intra-prediction mode, a mode index can be assigned based on a probability of occurrence of each intra-prediction mode. Figure 3 illustrates an example of intra-prediction modes and corresponding mode indexes that can be used with HEVC. The arrows in figure 3 represent a prediction direction, and the numbers represent a mode index. Table 7 below provides a correspondence between a CU size and the number of intra-prediction modes available to encode CUs of that size. As can be seen in Table 7, 8x8, 16x16 and 32x32 CUs can use the 35 intra-prediction modes illustrated in figure 3, while 4x4 and 64x64 CUs use a smaller set of intra-prediction modes.

Table 7

  Coding Unit Size  Number of Intra-prediction Modes
  4x4               18
  8x8               35
  16x16             35
  32x32             35
  64x64             4

In examples where the number of intra-prediction modes varies based on block size, configuration data 66 may include different tables for different block sizes. Accordingly, a context for encoding an indication of the intra-prediction mode used to encode a block may include the block size, in addition to the coding modes used to encode neighboring blocks. The entropy coding unit 56 can select the mode index table and the codeword mapping table used to select a codeword representative of the selected intra-prediction mode used to encode the block based on the context for the block. In addition, the mode index tables for blocks of a particular size may have numbers of entries equal to the number of intra-prediction modes for blocks of that size. Thus, mode index tables for blocks of size 4x4 can have 18 entries, mode index tables for blocks of size 8x8, 16x16 and 32x32 can have 35 entries, and mode index tables for blocks of size 64x64 can have 4 entries. Other block sizes, for example, 128x128, may also have a certain number of intra-prediction modes available.
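The correspondence of Table 7 amounts to a simple lookup that a coder could consult when sizing its mode index tables; the following is a minimal sketch (names are invented), assuming the size-to-mode-count correspondence stated in Table 7.

```python
# Number of available intra-prediction modes per CU size (Table 7).
INTRA_MODES_BY_CU_SIZE = {4: 18, 8: 35, 16: 35, 32: 35, 64: 4}

def num_intra_prediction_modes(cu_size):
    """Return the number of intra-prediction modes for a square CU of
    the given side length; a mode index table for blocks of this size
    would have this many entries."""
    return INTRA_MODES_BY_CU_SIZE[cu_size]

print(num_intra_prediction_modes(16))  # 35
```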
The intra-prediction modes available for blocks of size 8x8, 16x16 and 32x32 may be the same, and therefore the same mode index tables can be used for blocks of sizes 8x8, 16x16 and 32x32. Although the same modes may be possible for blocks of these sizes, however, the likelihood of using a particular mode to encode a block may vary based on the size of the block. Accordingly, the entropy coding unit 56 can determine a codeword mapping table for a particular mode index table based on the block size for which an intra-prediction mode is to be signaled, in some examples. For example, Tables 2, 4 and 6 above are simply illustrative tables representing various sets of codewords. However, it should be understood that other types of codewords can be used in other examples. Any set of codewords can be used for the codeword mapping table (that is, the modified intra-prediction mode index mapping table, or mapping table), as long as each codeword is uniquely decodable. After predicting a current block, for example, using intra-prediction or inter-prediction, video encoder 20 can form a residual video block by subtracting the prediction data calculated by the motion compensation unit 44 or intra-prediction module 46 from the original video block being encoded. Adder 49 represents the component or components that perform this subtraction operation. Transformation module 52 applies a transformation, such as a discrete cosine transformation (DCT) or a conceptually similar transformation, to the residual block, producing a video block comprising residual transformation coefficient values. Transformation module 52 can perform other transformations, such as those defined by the H.264 standard, that are conceptually similar to DCT. Wavelet transformations, integer transformations, subband transformations, or other types of transformations could also be used.
In any case, transformation module 52 applies the transformation to the residual block, producing a residual block of transformation coefficients. The transformation can convert the residual information from a pixel value domain to a transformation domain, such as a frequency domain. The quantization unit 54 quantizes the residual transformation coefficients to further reduce the bit rate. The quantization process can reduce the bit depth associated with some or all of the coefficients. The degree of quantization can be changed by adjusting a quantization parameter. After quantization, the entropy coding unit 56 entropy-codes the quantized transformation coefficients. For example, the entropy coding unit 56 can perform context adaptive variable length coding (CAVLC), context adaptive binary arithmetic coding (CABAC), or another entropy coding technique. Following the entropy coding by entropy coding unit 56, the encoded video can be transmitted to another device or archived for later transmission or retrieval. In the case of context adaptive binary arithmetic coding, the context can be based on neighboring blocks and/or block sizes. In some cases, the entropy coding unit 56 or another unit of video encoder 20 can be configured to perform other coding functions, in addition to entropy coding and intra-prediction mode coding as described above. For example, entropy coding unit 56 can be configured to determine coded block pattern (CBP) values for blocks and partitions. In addition, in some cases, the entropy coding unit 56 can perform run length coding of the coefficients in a macroblock or partition thereof. In particular, the entropy coding unit 56 can apply a zigzag scan or other scan pattern to scan the transformation coefficients in a macroblock or partition and encode runs of zeros for further compression.
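The quantization step described above can be illustrated with a simplified uniform scalar quantizer. This is a sketch only, under assumed parameter names; real codecs derive the step size from the quantization parameter via per-standard tables and use more elaborate rounding.

```python
def quantize(coeffs, step):
    """Uniform scalar quantization of transform coefficients: a larger
    step discards more precision, reducing the bit rate."""
    return [round(c / step) for c in coeffs]

def dequantize(levels, step):
    """Inverse quantization, recovering approximate coefficient values."""
    return [level * step for level in levels]

coeffs = [103.0, -47.0, 6.0, -2.0]
levels = quantize(coeffs, step=10)
print(levels)                   # [10, -5, 1, 0]
print(dequantize(levels, 10))   # [100, -50, 10, 0]: a lossy approximation
```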
The entropy coding unit 56 can also construct header information with syntax elements suitable for transmission in the encoded video bit stream. The inverse quantization unit 58 and the inverse transformation module 60 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block in the pixel domain, for example, for future use as a reference block. The motion compensation unit 44 can calculate a reference block by adding the residual block to a prediction block of one of the frames of the reference frame store 64. The motion compensation unit 44 can also apply one or more interpolation filters to the reconstructed residual block to calculate sub-integer pixel values for use in motion estimation. Adder 62 adds the reconstructed residual block to the motion-compensated prediction block produced by the motion compensation unit 44 to produce a reconstructed video block for storage in the reference frame store 64. The reconstructed video block can be used by the motion estimation unit 42 and the motion compensation unit 44 as a reference block for inter-coding a block in a subsequent video frame. In this way, video encoder 20 represents an example of a video coder configured to determine a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data
based on a coding context for the current block; select a codeword table based on the context for the current block, where the codeword table comprises a plurality of codewords corresponding to modified intra-prediction mode indexes that correspond to the intra-prediction modes other than the first most likely intra-prediction mode and the second most likely intra-prediction mode; encode the current block using one of the intra-prediction modes other than the first most likely intra-prediction mode and the second most likely intra-prediction mode; determine one of the modified intra-prediction mode indexes that corresponds to the one of the intra-prediction modes using the codeword table; and encode a codeword from the selected codeword table by performing a CABAC process, where the codeword corresponds to the one of the modified intra-prediction mode indexes. In this way, video encoder 20 also represents an example of a video encoder configured to determine a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data based on a coding context for the current block;
select a codeword table based on the context for the current block, where the codeword table comprises a plurality of codewords corresponding to codeword indexes, where the codeword indexes are mapped to intra-prediction modes; encode the current block using one of the intra-prediction modes other than the first most likely intra-prediction mode and the second most likely intra-prediction mode; determine a modified codeword index based on the codeword index of the one of the intra-prediction modes used to encode the current block, a codeword index mapped to the first most likely mode, and a codeword index mapped to the second most likely mode; and encode a codeword from the selected codeword table by performing a CABAC process, where the codeword corresponds to the modified codeword index. Figure 4 is a block diagram illustrating an example of video decoder 30, which decodes an encoded video sequence. In the example of figure 4, video decoder 30 includes an entropy decoding unit 70, a motion compensation unit 72, an intra-prediction module 74, an inverse quantization unit 76, an inverse transformation unit 78, a memory 82 and an adder 80. Video decoder 30 may, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to video encoder 20 (figure 2). The motion compensation unit 72 can generate prediction data based on motion vectors received from the entropy decoding unit 70. The motion compensation unit 72 can use motion vectors received in the bit stream to identify a prediction block in the reference frames in the reference frame store 82. The intra-prediction module 74 can use intra-prediction modes received in the bit stream to form a prediction block from spatially adjacent blocks. In particular, video decoder 30, in the example of figure 4, includes configuration data 84.
Configuration data 84 is substantially similar to configuration data 66 of figure 2, in that configuration data 84 includes information describing contexts for intra-predicted blocks, as well as one of a plurality of mode index tables to use for each context, one of a plurality of modified intra-prediction mode index (or codeword mapping) tables to use for each context, and a most likely intra-prediction mode for each context. The entropy decoding unit 70 can receive a codeword representative of an intra-prediction mode to use to decode an encoded block of video data. The entropy decoding unit 70 can determine a context for the encoded block, for example, based on the intra-prediction modes of a left-neighboring block and an above-neighboring block of the encoded block and/or a size of the encoded block. Based on the context, the entropy decoding unit 70 can determine one or more most likely intra-prediction modes to use in decoding the block, as well as a mode index table and a modified intra-prediction mode index table to use in determining the actual intra-prediction mode to use in decoding the block. When using a single most likely intra-prediction mode, if the codeword comprises a first bit, for example, '0', then the entropy decoding unit 70 can determine that the actual intra-prediction mode is the most likely intra-prediction mode for the encoded block. Otherwise, the entropy decoding unit 70 can determine a modified intra-prediction mode index based on the received codeword, using the modified intra-prediction mode index table for the context of the encoded block. Let n represent the modified intra-prediction mode index, and let m represent the mode index for the most likely intra-prediction mode. When n < m, the entropy decoding unit 70 can determine that the actual intra-prediction mode for the encoded block has a mode index of n.
Otherwise (that is, when n >= m), the entropy decoding unit 70 can determine that the actual intra-prediction mode for the encoded block has a mode index of n + 1. Using the mode index, which is equal to n or n + 1 as described above, the entropy decoding unit 70 can retrieve information indicating the actual intra-prediction mode to use for decoding the encoded block and send a mode indication to the intra-prediction module 74. When using more than one most likely intra-prediction mode, such as two most likely intra-prediction modes, if a first bit has a certain value, for example, '0', then the entropy decoding unit 70 can determine that the actual intra-prediction mode is one of the most likely intra-prediction modes for the encoded block. In such cases, based on a second bit or series of bits, the entropy decoding unit 70 can determine which of the most likely intra-prediction modes is the selected intra-prediction mode. Otherwise, following the first bit, the entropy decoding unit 70 can determine a modified intra-prediction mode index based on the received codeword and, based on the modified intra-prediction mode index, determine the intra-prediction mode selected for the block. As an example, let n represent the modified intra-prediction mode index, and let m1 and m2 represent the mode indexes for the most likely intra-prediction modes. If n < m1, then the entropy decoding unit 70 can determine that the intra-prediction mode selected for the encoded block has a mode index equal to n. When n + 1 < m2 (but n is not less than m1), then the entropy decoding unit 70 can determine that the intra-prediction mode selected for the encoded block has a mode index of n + 1. Otherwise, when n + 1 is not less than m2, then the entropy decoding unit 70 can determine that the intra-prediction mode selected for the encoded block has a mode index of n + 2.
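The single-mode and two-mode rules above can be captured in one small routine. This is an illustrative sketch (the function name is invented) that generalizes the comparisons the entropy decoding unit 70 is described as performing: each most likely mode index at or below the running value shifts the result up by one.

```python
def actual_mode_index(n, most_likely_mode_indexes):
    """Recover the actual mode index from the modified intra-prediction
    mode index n, given the mode indexes of the most likely mode(s);
    this reproduces the n, n + 1, n + 2 cases described above."""
    mode = n
    for m in sorted(most_likely_mode_indexes):
        if mode < m:
            break
        mode += 1
    return mode

print(actual_mode_index(2, [3]))     # n < m:  mode index 2
print(actual_mode_index(3, [3]))     # n >= m: mode index 4
print(actual_mode_index(4, [4, 6]))  # n >= m1, n + 1 < m2: mode index 5
```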
Using the mode index, the entropy decoding unit 70 can retrieve the information indicating the intra-prediction mode to use for decoding the encoded block and send a mode indication to the intra-prediction module 74. Likewise, if intra-prediction mode indexes are mapped to codeword indexes and more than one most likely mode is used, if a first bit or series of bits has a certain value, for example, '0', then the entropy decoding unit 70 can determine that the actual intra-prediction mode is one of the most likely intra-prediction modes for the encoded block. In such cases, based on a second bit or series of bits, the entropy decoding unit 70 can determine which of the most likely intra-prediction modes is the selected intra-prediction mode. Otherwise, following the first bit or series of bits, the entropy decoding unit 70 can determine a modified codeword index mapped to the received codeword and, based on the modified codeword index, determine the intra-prediction mode selected for the block. As an example, let Cmod represent the modified codeword index, and let Cm1 and Cm2 represent the codeword indexes for the most likely intra-prediction modes. If Cmod < Cm1, then the entropy decoding unit 70 can determine that the actual selected intra-prediction mode for the encoded block has a codeword index equal to Cmod. When Cmod + 1 < Cm2 (but Cmod is not less than Cm1), then the entropy decoding unit 70 can determine that the actual selected intra-prediction mode for the encoded block has a codeword index of Cmod + 1. Otherwise, when Cmod + 1 is not less than Cm2, then the entropy decoding unit 70 can determine that the actual selected intra-prediction mode for the encoded block has a codeword index of Cmod + 2. Using the codeword index, the entropy decoding unit 70 can retrieve the information indicating the actual selected intra-prediction mode to use for decoding the encoded block and send an indication of the mode to the intra-prediction module 74.
The intra-prediction module 74 can use the indication of the intra-prediction mode to intra-predict the encoded block, for example, using pixels of neighboring, previously decoded blocks. For examples in which the block is inter-prediction encoded, the motion compensation unit 72 can receive information defining a motion vector in order to retrieve motion-compensated prediction data for the encoded block. In any case, the motion compensation unit 72 or the intra-prediction module 74 can provide information defining a prediction block to the adder 80. The inverse quantization unit 76 inverse quantizes, that is, de-quantizes, the quantized block coefficients provided in the bit stream and decoded by the entropy decoding unit 70. The inverse quantization process may include a conventional process, for example, as defined by the H.264 decoding standard or as performed by the HEVC Test Model. The inverse quantization process may also include the use of a quantization parameter QPy calculated by the encoder 20 for each macroblock to determine a degree of quantization and, likewise, a degree of inverse quantization to be applied. The inverse transformation unit 78 applies an inverse transformation, for example, an inverse DCT, an inverse integer transformation, or a conceptually similar inverse transformation process, to the transformation coefficients to produce residual blocks in the pixel domain. The motion compensation unit 72 produces motion-compensated blocks, possibly performing interpolation based on interpolation filters. The identifiers for the interpolation filters to be used for motion estimation with sub-pixel precision can be included in the syntax elements. The motion compensation unit 72 may use interpolation filters as used by the video encoder 20 during the encoding of the video block to calculate interpolated values for the sub-integer pixels of a reference block.
The motion compensation unit 72 can determine the interpolation filters used by the video encoder 20 according to the received syntax information and use the interpolation filters to produce prediction blocks. The motion compensation unit 72 uses some of the syntax information to determine the block sizes used to encode the frames of the encoded video sequence, partition information that describes how each block of a frame or slice of the encoded video sequence is divided, modes indicating how each partition is encoded, one or more reference frames (and reference frame lists) for each inter-encoded block or partition, and other information for decoding the encoded video sequence. The adder 80 adds the residual blocks to the corresponding prediction blocks generated by the motion compensation unit 72 or the intra-prediction module 74 to form the decoded blocks. If desired, a deblocking filter can also be applied to filter the decoded blocks in order to remove blocking artifacts. The decoded video blocks are then stored in the reference frame store 82, which provides the reference blocks for subsequent motion compensation and also produces decoded video for presentation on a display device (such as display device 32 of figure 1). Thus, the video decoder 30 of figure 4 represents an example of a video decoder configured to determine a first most likely intra-prediction mode and a second most likely intra-prediction mode for an encoded block of video data
based on a context for the current block; select a codeword table based on the context for the current block, where the codeword table comprises a plurality of codewords corresponding to modified intra-prediction mode indexes that correspond to the intra-prediction modes other than the first most likely intra-prediction mode and the second most likely intra-prediction mode; perform a CABAC process to determine a received codeword; determine one of the modified intra-prediction mode indexes that corresponds to the received codeword using the codeword table; select an intra-prediction mode other than the first most likely intra-prediction mode and the second most likely intra-prediction mode to use in decoding the encoded block, where the selected intra-prediction mode corresponds to the determined one of the modified intra-prediction mode indexes; and decode the current block using the selected intra-prediction mode. In this way, the video decoder 30 of figure 4 also represents an example of a video decoder configured to determine a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data
based on a context for the current block; select a codeword table based on the context for the current block, where the codeword table comprises a plurality of codewords corresponding to codeword indexes, where the codeword indexes are mapped to intra-prediction modes; perform a CABAC process to determine a received codeword; determine a modified codeword index that corresponds to the received codeword using the codeword table; select an intra-prediction mode other than the first most likely intra-prediction mode and the second most likely intra-prediction mode to use in decoding the encoded block, where the selected intra-prediction mode corresponds to a codeword index determined based on the modified codeword index, the first most likely intra-prediction mode, and the second most likely intra-prediction mode; and decode the current block using the selected intra-prediction mode. Figure 5A is a block diagram illustrating an example of a CABAC coding unit 50A that can be used in accordance with the techniques described in this description. The CABAC coding unit 50A includes a binary value mapping module 51A, a context designation module 53A, and an adaptive arithmetic coding module 55A. The adaptive arithmetic coding module 55A includes a probability estimation module 57A and a coding engine 59A. The CABAC coding unit 50A can, for example, be considered a part of the entropy coding unit 56 of figure 2. For a syntax element with a non-binary value, the binary value mapping module 51A can map the value of the syntax element to a binary sequence, also referred to as a "bin string," which can comprise one or more bits or "binaries." In other words, the binary value mapping module 51A can "binarize" the value of the syntax element, so that the value is represented using the binary sequence.
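Because the codewords of Table 6 are already binary strings, binarization of a modified codeword index reduces to a table lookup. The sketch below illustrates this; the table contents follow Table 6, while the function name is invented.

```python
# Table 6: modified codeword index -> codeword (already a binary string).
TABLE_6 = {0: "000", 1: "001", 2: "010", 3: "011",
           4: "100", 5: "101", 6: "110"}

def binarize_modified_codeword_index(cmod):
    """'Binarize' a modified codeword index by looking up its codeword;
    each resulting bit (binary) is then coded by the arithmetic coder."""
    return TABLE_6[cmod]

print(binarize_modified_codeword_index(6))  # "110"
```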
It should be noted that an arbitrary binary sequence can be assigned to any particular non-binary-valued syntax element, and that the binary sequence does not necessarily represent the value of the syntax element in binary form. In the examples of Tables 2, 4 and 6, described above, the codewords provided are already in binary form and, therefore, can be used for binarization. Mapping a syntax value to a binary codeword essentially "binarizes" the syntax value before the syntax element is passed to the binary value mapping module 51A. For non-binary syntax elements, however, the binary value mapping module 51A can binarize the syntax element. As previously mentioned, since each of the codewords of Tables 2, 4 and 6 is already represented as a binary value, that is, already binarized, the codewords can bypass this binarization process and proceed to the CABAC context modeling stage performed by the context designation module 53A, described below. Similarly, the one or more binaries described above with reference to Tables 1, 3 and 5, indicating whether an intra-prediction mode used is a most likely intra-prediction mode for a particular context, can also be represented as binary values, and therefore may not need to be binarized for the same reasons mentioned above. In other examples, modified intra-prediction mode indexes, modified codeword indexes, other syntax elements, and indications of whether an intra-prediction mode used is a most likely intra-prediction mode for a particular context may have non-binary values, and therefore may use binarization. The context designation module 53A designates a context for each binary of the binary sequence used to represent the syntax element. For example, the context designation module 53A can designate a different context for each binary within the binary sequence. Alternatively, the context designation module 53A can designate a common context for one or more binaries within the binary sequence.
In some other cases, for example, when a bin is encoded using a CABAC bypass mode, no explicit context is required, and the context assignment module 53A may not need to assign any context to the bin. In any case, in some examples, each context can be represented using a context index. In other words, each bin in the bin string can be associated with a context index that indicates the particular context assigned to the respective bin. The context assignment module 53A can perform context assignment for each bin within the bin string using a process sometimes referred to as "context modeling." For example, the context assignment module 53A can assign each bin to a given context based on a context model. The context model determines how a particular context is calculated for a given bin. For example, the context can be calculated based on information available for the bin, such as values of previously coded syntax elements corresponding to neighboring blocks of the video data, or a relative position of the bin within the bin string. For example, the context model may use modified intra-prediction mode index values (or the codewords used to represent the indexes) for neighboring blocks of video data above and to the left of the current block, and/or a position of the bin within the bin string, to calculate the context. In some examples, each context may include a plurality of "context states," where each of the context states is associated with a particular set of probability estimates that indicate a probability of a bin to which the context is assigned comprising a given value, for example, "0" or "1". In addition, each context can be associated with a particular current context state at any given time, where the current context state indicates the most current probability estimates for that context.
The bins of the bin string can subsequently be encoded by the adaptive arithmetic encoding module 55A. To encode a bin, the probability estimation module 57A of the adaptive arithmetic encoding module 55A can determine probability estimates for the bin being encoded based on the context (and its current state) assigned to the bin. The encoding engine 59A can use the bin value, and the probability estimates corresponding to the context (and its current state) assigned to the bin, as inputs to the adaptive arithmetic encoding module 55A when encoding the bin. The probability estimates are determined for the bin by the probability estimation module 57A using the assigned context, as described above. As previously described, these probability estimates generally correspond to the probability that the bin has a value equal to "0" or a value equal to "1". The probability estimates can be the same for bins assigned to a common context and may differ between contexts, as reflected by the current context state of each of the contexts. Additionally, the probability estimates for the assigned context can be updated based on the actual value of the bin being encoded by the encoding engine 59A. For example, if a particular bin has a value of "1", then the probability estimates of "1" for the assigned context are increased. Similarly, if the bin has a value of "0", then the probability estimates of "0" for the assigned context are increased. In the examples described above, the probability estimates for the assigned context can be updated by updating the context state to reflect the most current probability estimates for the context, as previously described. For example, the most current probability estimates indicated by the updated context state can be used to encode a subsequent bin for which the same context is selected. The technique described above can be repeated for each bin in the bin string.
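As an illustration of the probability update described above, the following Python sketch models a single context state as an adaptive estimate of the probability that the next bin equals "1". Note that this is a simplified, hypothetical model: practical CABAC implementations use a table-driven finite-state machine over quantized probability states rather than the floating-point exponential update shown here:

```python
class ContextState:
    """Simplified context state: an adaptive estimate of the
    probability that the next bin encoded with this context is 1.
    (Hypothetical sketch; real CABAC uses quantized state tables.)"""
    def __init__(self, p_one=0.5, rate=0.05):
        self.p_one = p_one  # current probability estimate of bin == 1
        self.rate = rate    # adaptation speed

    def update(self, bin_value):
        # Move the estimate toward the value actually observed,
        # so the estimate for "1" increases after observing a "1"
        # and decreases after observing a "0".
        target = 1.0 if bin_value == 1 else 0.0
        self.p_one += self.rate * (target - self.p_one)

ctx = ContextState()
for b in [1, 1, 1, 1]:
    ctx.update(b)
# After a run of "1" bins, the estimated probability of "1"
# for this context has increased above its initial value of 0.5.
```

The updated estimate would then be used when the same context is selected for a subsequent bin, mirroring the behavior described in the paragraph above.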
In some cases, a bypass mode can be used for one or more bins of the bin string, in which case the one or more bins are encoded without using an explicitly assigned context model, which can simplify and speed up the encoding of the bins. For example, for one or more bins encoded using the bypass mode, nearly uniform probability estimates (for example, a probability of "0.5" of having a value of "0" or "1") can be assumed. In other words, the bypass mode can be used to encode approximately uniformly distributed bins. The CABAC process described above is intended to represent one example of a CABAC process. It is contemplated that modifications of the process described above, in addition to CABAC processes alternative to the one described above, are within the scope of the techniques described in this description. Additionally, the techniques of this description further contemplate the use of other context adaptive entropy encoding processes, such as Probability Interval Partitioning Entropy (PIPE) encoding processes, in addition to other context adaptive entropy encoding processes. Figure 5B is a block diagram showing an example of a CABAC decoding unit 50B that can be used according to the techniques described in this description. The CABAC decoding unit 50B includes a bin-to-value mapping module 51B, a context assignment module 53B, and an adaptive arithmetic decoding module 55B. The adaptive arithmetic decoding module 55B includes a probability estimation module 57B and a decoding engine 59B. The CABAC decoding unit 50B can, for example, form part of the entropy decoding unit 70 of figure 4. In a manner generally reciprocal to that described above with reference to the CABAC encoding unit 50A, the CABAC decoding unit 50B can decode an encoded bin string comprising one or more bins. As previously described, the encoded bin string can represent a binary or non-binary value encoded syntax element.
For example, the context assignment module 53B can use context modeling to determine a context that should be assigned to a particular bin of the bin string. The probability estimation module 57B can use the assigned context (and its current state) to determine the probability estimates used to decode the bin. In addition, the decoding engine 59B can decode the bin using the probability estimates. In a manner similar to that described above, this process can be repeated for some or all of the bins in the bin string, resulting in the decoded bin string. Finally, the bin-to-value mapping module 51B can map the decoded bins of the bin string to a non-binary value syntax element, or "de-binarize" the one or more decoded bins. Again, the CABAC process described above is intended to represent one example of a CABAC process. It is contemplated that modifications of the process described above, in addition to CABAC processes alternative to those described above, are within the scope of the techniques described in this description. Additionally, the techniques of this description further contemplate the use of other context adaptive entropy encoding processes, such as PIPE processes, in addition to other context adaptive entropy encoding processes. Figure 6 is a flowchart illustrating an illustrative method for intra-prediction encoding of a block of video data. The techniques of figure 6 can generally be performed by any processing unit or processor, implemented in hardware, software, firmware, or a combination thereof, and when implemented in software or firmware, corresponding hardware can be provided to execute the instructions for the software or firmware. For purposes of illustration, the techniques of figure 6 are described with respect to video encoder 20 (figures 1 and 2), although it should be understood that other devices can be configured to perform similar techniques.
In addition, the steps illustrated in figure 6 can be performed in a different order or in parallel, and additional steps can be added and certain steps omitted, without departing from the techniques of this description. Initially, the video encoder 20 can select an intra-prediction mode for a current block of video data (100). For example, the intra-prediction module 46 can calculate rate-distortion values for various intra-prediction modes used to encode the block, and then select the intra-prediction mode exhibiting the best rate-distortion value among the tested intra-prediction modes. The intra-prediction module 46 can then encode the block using the selected intra-prediction mode (102). That is, the intra-prediction module 46 can calculate a prediction block for the block based on the selected intra-prediction mode. The video encoder 20 can additionally calculate a difference between the prediction block and the original block to produce a residual block, which the video encoder 20 can then transform and quantize. The video encoder 20 can additionally encode information representing the selected intra-prediction mode. That is, the intra-prediction module 46 can send an indication of the selected intra-prediction mode to the entropy encoding unit 56. The entropy encoding unit 56, or another unit of the video encoder 20, can determine a context for the block (104). The context for the block can include a block size and/or intra-prediction modes of neighboring blocks, such as an above-neighboring block and/or a left-neighboring block. The entropy encoding unit 56 can also select a modified intra-prediction mode index table for use in encoding the intra-prediction mode indicator based on the encoding context for the block (106). The entropy encoding unit 56 can additionally select an intra-prediction mode index table in some examples, while in other examples, the intra-prediction mode indexes may be fixed.
The entropy encoding unit 56 can further determine one or more most likely intra-prediction modes for the block context (108). The entropy encoding unit 56 can then select a codeword for the intra-prediction mode from the modified intra-prediction mode index table based on the most likely intra-prediction modes (110). For example, as discussed in more detail below, the entropy encoding unit 56 may use a single bit or a series of bits (for example, a single bit or two bits) to signal that the selected intra-prediction mode comprises one of the most likely intra-prediction modes. If the selected intra-prediction mode is not one of the most likely intra-prediction modes, the entropy encoding unit 56 may select a codeword to signal the selected intra-prediction mode. The entropy encoding unit 56 can then send the encoded block (for example, encoded quantized transform coefficients) to the bit stream and, using a CABAC process, can send the selected codeword to the bit stream (112). Figure 7A is a flowchart illustrating an illustrative method for selecting a codeword indicative of an intra-prediction mode for an encoded block. Again, the techniques of figure 7A are discussed with respect to the example of video encoder 20 for purposes of example. Figure 7A generally provides additional details for step 110 of figure 6. The steps of the method illustrated in figure 7A can be performed in a different order or in parallel, and additional steps can be added and certain steps omitted, without departing from the techniques of this description. Video encoder 20 can determine an encoding context for a current block (120), as discussed above. Likewise, video encoder 20 can select a modified intra-prediction mode index table based on the encoding context for the block (122A). The configuration data of the video encoder 20 can provide an indication of the modified intra-prediction mode index table, and in some examples, an intra-prediction mode index table, for the block context.
In addition, video encoder 20 can determine a most likely intra-prediction mode for use in encoding the block based on the encoding context for the block (124A). Again, the configuration data of video encoder 20 can provide an indication of the most likely intra-prediction mode for the block context. As discussed above, video encoder 20 can select an intra-prediction mode for the block, for use in the actual encoding of the block (126A). The video encoder 20 can determine whether the selected intra-prediction mode is the same as the most likely intra-prediction mode for the block, based on the context of the block (128A). If the selected mode is the most likely mode ("YES" branch of 128A), video encoder 20 can encode an indication that the intra-prediction mode used to encode the block is the most likely mode using a single bit, for example, '0' or '1' (130A). When the selected mode is not the most likely mode ("NO" branch of 128A), video encoder 20 can determine a mode index for the selected intra-prediction mode (132A), for example, from an intra-prediction mode index table. In some examples, the mode indexes can be global values regardless of the context, while in other examples, the configuration data of the video encoder 20 can map each context to one of a plurality of intra-prediction mode index tables. The video encoder 20 can additionally determine a mode index for the most likely intra-prediction mode. The video encoder 20 can then determine whether the mode index for the selected intra-prediction mode is less than the mode index for the most likely intra-prediction mode in the context for the block (134A). When the mode index for the selected intra-prediction mode is less than the mode index for the most likely intra-prediction mode ("YES" branch of 134A), video encoder 20 can determine a codeword from the modified intra-prediction mode index table for the block context corresponding to the mode index for the selected intra-prediction mode.
More particularly, the video encoder 20 can send, using a CABAC process, the codeword mapped to a modified intra-prediction mode index equal to the mode index for the selected intra-prediction mode (136A). On the other hand, when the mode index for the selected intra-prediction mode is greater than the mode index for the most likely intra-prediction mode ("NO" branch of 134A), video encoder 20 can determine a codeword from the modified intra-prediction mode index table for the block context corresponding to one less than the mode index for the selected intra-prediction mode. More particularly, video encoder 20 can send, using a CABAC process, the codeword mapped to a modified intra-prediction mode index equal to one less than the mode index for the selected intra-prediction mode (138A). Since the most likely intra-prediction mode is signaled separately, the modified intra-prediction mode index table does not need to map an additional codeword to an index for the most likely intra-prediction mode. Therefore, the modified intra-prediction mode index equal to the mode index for the most likely intra-prediction mode can instead be mapped to the intra-prediction mode having a mode index that is one greater than the mode index for the most likely intra-prediction mode. Thus, if there are K intra-prediction modes available for the block, the modified intra-prediction mode index table only needs to provide codewords for K-1 modified intra-prediction mode indexes, in addition to the single bit indicating whether the most likely intra-prediction mode is used to encode the block. Figure 7B is a flowchart illustrating an illustrative method for selecting a codeword indicative of an intra-prediction mode for an encoded block. Again, the techniques of figure 7B can be implemented on any suitable processor, although the techniques of figure 7B are discussed with respect to the example of video encoder 20 for purposes of example.
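For purposes of illustration only, the encoder-side mapping for the single most likely mode case of figure 7A can be sketched in Python as follows. The function name is hypothetical, and the sketch assumes distinct integer mode indexes:

```python
def modified_index(mode_index, mpm_index):
    """Map an intra-prediction mode index to a modified
    intra-prediction mode index, given the mode index of a single
    most likely mode. Returns None when the selected mode IS the
    most likely mode, since that case is signaled with a single
    bit rather than a codeword. (Hypothetical sketch.)"""
    if mode_index == mpm_index:
        return None            # signaled with the single bit (130A)
    if mode_index < mpm_index:
        return mode_index      # step 136A: index unchanged
    return mode_index - 1      # step 138A: one less than the index

# With K = 5 modes and the most likely mode at index 2, the four
# remaining modes occupy modified indexes 0..3 (K-1 codewords):
mapping = [modified_index(m, 2) for m in range(5)]
```

Evaluating the mapping for all five modes yields `[0, 1, None, 2, 3]`, showing that the K-1 remaining modes are packed into a contiguous range of modified indexes, as described above.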
Figure 7B generally provides additional details for step 110 of figure 6, for cases where two most likely modes are used. The steps of the method illustrated in figure 7B can be performed in a different order or in parallel, and additional steps can be added and certain steps omitted, without departing from the techniques of this description. Video encoder 20 can determine an encoding context for a current block (120B), as discussed above. Likewise, video encoder 20 can select a modified intra-prediction mode index table based on the encoding context for the block (122B). The configuration data of the video encoder 20 can provide an indication of the modified intra-prediction mode index table, and in some examples, an intra-prediction mode index table, for the block context. In addition, video encoder 20 can determine a first most likely intra-prediction mode and a second most likely intra-prediction mode for encoding the block based on the encoding context for the block (124B). Again, the configuration data of video encoder 20 can provide an indication of the most likely intra-prediction modes for the block context. As discussed above, video encoder 20 can select an intra-prediction mode for the block, for use in the actual encoding of the block (126B). The video encoder 20 can determine whether the selected intra-prediction mode is the same as one of the most likely intra-prediction modes for the block, based on the context of the block (128B). If the selected mode is one of the most likely modes ("YES" branch of 128B), then video encoder 20 can encode, based on the most likely modes, an indication of the intra-prediction mode used to encode the block using an initial series of bits, such as two bits that include a first bit to indicate that the actual mode is one of the most likely modes and a second bit to indicate which of the most likely modes is the actual mode (130B).
When the selected mode is not one of the most likely modes ("NO" branch of 128B), video encoder 20 can determine a mode index for the selected intra-prediction mode (132B), for example, from an intra-prediction mode index table. In some examples, mode indexes can be global values regardless of context, while in other examples, the configuration data of video encoder 20 can map each context to one of a plurality of intra-prediction mode index tables. The video encoder 20 can further determine the mode indexes for the most likely intra-prediction modes. The video encoder 20 can then determine whether the mode index for the selected intra-prediction mode is less than the mode indexes for the first most likely intra-prediction mode and the second most likely intra-prediction mode in the context for the block (134B). When the mode index for the selected intra-prediction mode is less than the mode indexes for both of the most likely intra-prediction modes ("YES" branch of 134B), video encoder 20 can determine a codeword from the modified intra-prediction mode index table for the block context corresponding to the mode index for the selected intra-prediction mode. More particularly, video encoder 20 can send, using a CABAC process, the codeword mapped to the modified intra-prediction mode index equal to the mode index for the selected intra-prediction mode (136B). On the other hand, when the mode index for the selected intra-prediction mode is not less than the mode indexes for both of the most likely intra-prediction modes ("NO" branch of 134B), the video encoder 20 can then determine whether the mode index for the selected intra-prediction mode is greater than or equal to the mode indexes for the first most likely intra-prediction mode and the second most likely intra-prediction mode in the context for the block (138B).
When the mode index for the selected intra-prediction mode is greater than or equal to the mode indexes for both of the most likely intra-prediction modes ("YES" branch of 138B), the video encoder 20 can determine a codeword from the modified intra-prediction mode index table for the block context corresponding to two less than the mode index for the selected intra-prediction mode. More particularly, the video encoder 20 can send, using a CABAC process, the codeword mapped to the modified intra-prediction mode index equal to two less than the mode index for the selected intra-prediction mode (140B). When the mode index for the selected intra-prediction mode is not less than the mode indexes for both of the most likely intra-prediction modes ("NO" branch of 134B), and when the mode index for the selected intra-prediction mode is not greater than or equal to the mode indexes for both of the most likely intra-prediction modes ("NO" branch of 138B), then the mode index for the selected intra-prediction mode is greater than the mode index for the first most likely intra-prediction mode, but less than the mode index for the second most likely intra-prediction mode. In this case, video encoder 20 can determine a codeword from the modified intra-prediction mode index table for the block context corresponding to one less than the mode index for the selected intra-prediction mode. More particularly, the video encoder 20 can send, using a CABAC process, the codeword mapped to the modified intra-prediction mode index equal to one less than the mode index for the selected intra-prediction mode (142B).
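For purposes of illustration only, the three-way encoder-side mapping of steps 134B-142B can be sketched in Python as follows, where `mpm1` and `mpm2` denote the mode indexes of the first and second most likely modes, with `mpm1 < mpm2` as assumed in the description (the function name is hypothetical):

```python
def modified_index_two_mpm(mode_index, mpm1, mpm2):
    """Map a mode index to a modified intra-prediction mode index
    given two most likely modes with mpm1 < mpm2. The most likely
    modes themselves are signaled with bits, not codewords, so
    None is returned for them. (Hypothetical sketch.)"""
    assert mpm1 < mpm2
    if mode_index in (mpm1, mpm2):
        return None             # signaled with the bit series (130B)
    if mode_index < mpm1:       # below both most likely mode indexes
        return mode_index       # step 136B: index unchanged
    if mode_index > mpm2:       # above both most likely mode indexes
        return mode_index - 2   # step 140B: two less than the index
    return mode_index - 1       # between them, step 142B: one less

# With K = 6 modes and most likely modes at indexes 1 and 4, the
# four remaining modes occupy modified indexes 0..3 (K-2 codewords):
mapping = [modified_index_two_mpm(m, 1, 4) for m in range(6)]
```

Evaluating the mapping for all six modes yields `[0, None, 1, 2, None, 3]`, illustrating that only K-2 codewords are needed, consistent with the discussion above.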
Since a first bit and second bit are used to signal the most likely modes as described above, the modified intra-prediction mode index table does not need to map additional codewords to indexes for the most likely intra-prediction modes. In this manner, if there are K intra-prediction modes available for the block, the modified intra-prediction mode index table only needs to provide codewords for K-2 modified intra-prediction mode indexes. Figure 8 is a flowchart illustrating an illustrative method for intra-prediction decoding of a block of video data. The techniques of figure 8 can generally be performed by any processing unit or processor, whether implemented in hardware, software, firmware, or a combination thereof, and when implemented in software or firmware, corresponding hardware can be provided to execute the instructions for the software or firmware. For illustrative purposes, the techniques of figure 8 are described with respect to video decoder 30 (figures 1 and 4), although it should be understood that other devices can be configured to perform similar techniques. In addition, the steps illustrated in figure 8 can be performed in a different order or in parallel, and additional steps can be added and certain steps omitted, without departing from the techniques of this description. The video decoder 30 can determine a codeword for an intra-prediction encoded block using a CABAC process (150). The codeword can generally represent the intra-prediction mode used to encode the block and, likewise, the intra-prediction mode to be used to decode the block. The video decoder 30 can determine an encoding context for the block in a manner similar to the video encoder 20 (152), for example, based on a block size and/or intra-prediction modes of neighboring blocks, such as an above-neighboring block and/or a left-neighboring block.
The video decoder 30 can additionally select a modified intra-prediction mode index table for the block based on the encoding context determined for the block (154). The video decoder 30 can also, in some examples, determine an intra-prediction mode index table based on the context, while in other examples, the intra-prediction mode index table may be fixed and applied to all contexts globally. The video decoder 30 can additionally determine one or more most likely intra-prediction modes for the context of the block (156). The video decoder 30 can then determine an actual intra-prediction mode for use in decoding the block using the selected codeword table, the most likely intra-prediction modes, and the received codeword (158). For example, if the codeword comprises a single bit or series of bits indicating whether the selected mode is a most likely mode, then the video decoder 30 can use the single bit or series of bits to determine whether a most likely intra-prediction mode should be used to decode the block. If the selected mode is determined not to be a most likely mode, then video decoder 30 can determine a modified intra-prediction mode index based on the codeword, using the modified intra-prediction mode index table, and based on the modified intra-prediction mode index, the video decoder 30 can determine the intra-prediction mode used to encode the block. The video decoder 30 can use the determined intra-prediction mode to decode the block (160). For example, video decoder 30 can calculate a prediction block for the block using the determined intra-prediction mode. The video decoder 30 can additionally receive encoded quantized transform coefficients, which video decoder 30 can decode, inverse quantize, and inverse transform to reconstruct a residual block for the block. The video decoder 30 can then add the prediction block and the residual block to form a decoded block.
The video decoder 30 can output the decoded block (162), which can include one or both of sending the decoded video block to a display device for display (for example, after intermediate storage), and storing a copy of the decoded block in a reference frame store for use as a reference block when decoding subsequent blocks of video data, for example, in temporally separate frames or slices. Figure 9A is a flowchart illustrating an illustrative method for determining an intra-prediction mode for a block using a received codeword indicative of the intra-prediction mode for an encoded block. Again, the techniques of figure 9A can be implemented on any suitable processor, although the techniques of figure 9A are discussed with respect to the example of video decoder 30 for purposes of example and explanation. Figure 9A generally provides additional details for step 160 of figure 8. The steps of the method illustrated in figure 9A can be performed in a different order or in parallel, and additional steps can be added and certain steps omitted, without departing from the techniques of this description. The video decoder 30 can determine a codeword for an intra-encoded block using a CABAC process (170A). As discussed above, the video decoder 30 can determine an encoding context for the block (172A), for example, based on a block size and/or intra-prediction encoding modes of neighboring blocks. Based on the determined context, the video decoder 30 can select a modified intra-prediction mode index table for the block (174A), and determine a most likely intra-prediction mode for the block (176A). In some examples, video decoder 30 may additionally select an intra-prediction mode index table for the block based on the determined context. The video decoder 30 can determine whether a first bit of the codeword indicates that the selected intra-prediction mode is the most likely mode (178A).
If the selected intra-prediction mode is the most likely mode ("YES" branch of 178A), the video decoder 30 can decode the block using the most likely intra-prediction mode (180A). On the other hand, if the selected intra-prediction mode is an intra-prediction mode other than the most likely mode ("NO" branch of 178A), then video decoder 30 can determine a modified intra-prediction mode index (MIPM) based on the codeword from the selected modified intra-prediction mode index table (182A). The video decoder 30 can then determine whether the modified intra-prediction mode index is less than the mode index for the most likely intra-prediction mode for the block context (184A). If the modified intra-prediction mode index is less than the mode index for the most likely intra-prediction mode ("YES" branch of 184A), video decoder 30 can decode the block using the intra-prediction mode having a mode index that is equal to the modified intra-prediction mode index (186A). On the other hand, if the modified intra-prediction mode index is greater than or equal to the mode index for the most likely intra-prediction mode ("NO" branch of 184A), video decoder 30 can decode the block using the intra-prediction mode having a mode index that is equal to one greater than the modified intra-prediction mode index (188A). Figure 9B is a flowchart illustrating an illustrative method for determining an intra-prediction mode for a block using a received codeword indicative of the intra-prediction mode for an encoded block. Again, the techniques of figure 9B can be implemented on any suitable processor, although the techniques of figure 9B are discussed with respect to the example of video decoder 30 for purposes of example and explanation. Figure 9B generally provides additional details for step 160 of figure 8, for cases where more than one most likely mode is used.
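For purposes of illustration only, the decoder-side mapping of steps 184A-188A can be sketched in Python as follows, together with a round-trip check against the encoder-side mapping of figure 7A (the function name is hypothetical):

```python
def mode_from_modified_index(modified, mpm_index):
    """Recover an intra-prediction mode index from a modified
    intra-prediction mode index, for the single most likely mode
    case. (Hypothetical sketch of steps 184A-188A.)"""
    if modified < mpm_index:
        return modified        # step 186A: index unchanged
    return modified + 1        # step 188A: one greater

# Round trip: with the most likely mode at index 2, every mode
# other than the most likely one is recovered from its modified
# index, mirroring the encoder-side mapping of figure 7A.
recovered = []
for mode in [0, 1, 3, 4]:
    modified = mode if mode < 2 else mode - 1   # encoder side
    recovered.append(mode_from_modified_index(modified, 2))
```

Running the loop shows `recovered == [0, 1, 3, 4]`, confirming that the decoder-side mapping inverts the encoder-side mapping for all modes other than the most likely mode.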
The steps of the method illustrated in figure 9B can be performed in a different order or in parallel, and additional steps can be added and certain steps omitted, without departing from the techniques of this description. The video decoder 30 can determine a codeword for an intra-encoded block using a CABAC process (170B). As discussed above, the video decoder 30 can determine an encoding context for the block (172B), for example, based on a block size and/or intra-prediction encoding modes of neighboring blocks. Based on the determined context, the video decoder 30 can select a modified intra-prediction mode index table for the block (174B), and determine the most likely intra-prediction modes for the block (176B). In some examples, the video decoder 30 may additionally select an intra-prediction mode index table for the block based on the determined context. The video decoder 30 can determine whether a first bit or series of bits of the codeword indicates that the selected intra-prediction mode is one of the most likely modes (178B). If the selected mode is one of the most likely modes ("YES" branch of 178B), then video decoder 30 can decode the block using one of the most likely intra-prediction modes (180B). The video decoder 30 can, for example, receive a second bit or series of bits indicating which of the most likely modes is the selected mode. On the other hand, if the first bit or series of bits indicates that the selected mode is not one of the most likely modes ("NO" branch of 178B), the video decoder 30 can determine a modified intra-prediction mode index (MIPM) based on the codeword from the selected modified intra-prediction mode index table (182B). The video decoder 30 can then determine whether the modified intra-prediction mode index is less than the mode index for the first most likely intra-prediction mode for the block context (184B).
As explained earlier, the mode index for the first most likely mode is assumed to be less than the mode index for the second most likely mode. Therefore, if the modified intra-prediction mode index is less than the mode index for the first most likely intra-prediction mode, it is also less than the mode index for the second most likely intra-prediction mode. If the modified intra-prediction mode index is less than the mode index for the first most likely intra-prediction mode ("YES" branch of 184B), then video decoder 30 can decode the block using the intra-prediction mode having a mode index that is equal to the modified intra-prediction mode index (186B). If the modified intra-prediction mode index is not less than the mode index for the first most likely intra-prediction mode ("NO" branch of 184B), then video decoder 30 can determine whether the modified intra-prediction mode index plus one is less than the mode index for the second most likely intra-prediction mode for the block context (188B). If the modified intra-prediction mode index plus one is less than the mode index for the second most likely intra-prediction mode for the block context ("YES" branch of 188B), then video decoder 30 can decode the block using the intra-prediction mode having a mode index that is equal to one greater than the modified intra-prediction mode index (190B). If the modified intra-prediction mode index plus one is not less than the mode index for the second most likely intra-prediction mode ("NO" branch of 188B), then the video decoder 30 can decode the block using the intra-prediction mode having a mode index that is equal to two greater than the modified intra-prediction mode index (192B).
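For purposes of illustration only, the decoder-side mapping of steps 184B-192B can be sketched in Python as follows, together with a round-trip check against the encoder-side mapping of figure 7B (the function names are hypothetical, and `mpm1 < mpm2` is assumed as in the description):

```python
def mode_from_modified_index_two_mpm(modified, mpm1, mpm2):
    """Recover an intra-prediction mode index from a modified
    intra-prediction mode index, for the two most likely modes
    case with mpm1 < mpm2. (Hypothetical sketch of 184B-192B.)"""
    if modified < mpm1:
        return modified        # step 186B: index unchanged
    if modified + 1 < mpm2:
        return modified + 1    # step 190B: one greater
    return modified + 2        # step 192B: two greater

def encode_two_mpm(mode, mpm1, mpm2):
    """Encoder-side mapping per figure 7B, for the round trip."""
    if mode < mpm1:
        return mode
    return mode - 2 if mode > mpm2 else mode - 1

# Round trip: most likely modes at indexes 1 and 4, six modes total.
recovered = [mode_from_modified_index_two_mpm(encode_two_mpm(m, 1, 4), 1, 4)
             for m in [0, 2, 3, 5]]
```

Running the round trip shows `recovered == [0, 2, 3, 5]`, confirming that the three decoder branches exactly invert the three encoder branches for every mode other than the two most likely modes.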
Although the methods of figures 6, 7A, 7B, 8, 9A and 9B have been illustrated with respect to the mapping of modified intra-prediction mode indices to mode indices, it should be understood that the underlying techniques of these methods may also be used to map mode indices to code word indices, and vice versa, as described above with respect to the examples in Tables 5 and 6. Figure 10 is a conceptual diagram illustrating an example of configuration data 250, which indicates the relationships between an intra-prediction mode index table 200, a modified intra-prediction mode index table 210, and context data 220. Configuration data 250 can generally correspond to configuration data 66 (figure 2) or configuration data 84 (figure 4). In addition, the configuration data describing contexts, tables and most likely intra-prediction modes must be the same in both the encoder and the decoder for a given bit stream. In the example of figure 10, the intra-prediction mode index table 200 includes a set of intra-prediction modes 202-1 through 202-K (intra-prediction modes 202) and corresponding indices 204-1 through 204-K. Although only one intra-prediction mode index table 200 is illustrated for purposes of explanation and example, it should be understood that configuration data 250 may include a number of intra-prediction mode index tables similar to the intra-prediction mode index table 200. The intra-prediction mode index tables need not all be the same size, as the number of intra-prediction modes available for a block may depend on the block size, as discussed above with respect to, for example, Table 5. Indices 204 can also be referred to as intra-prediction mode indices or simply as mode indices. The modified intra-prediction mode index table 210 includes indices 212-1 through 212-(K-1), along with code words 214-1 through 214-(K-1). Thus, the modified intra-prediction mode index table 210 contains one fewer entry (K-1) than the intra-prediction mode index table 200 (K).
As discussed above, the most likely intra-prediction mode can be indicated using a single bit or series of bits, rather than one of the code words 214. Therefore, intra-prediction modes other than the most likely intra-prediction mode can be represented by one of the code words 214. Again, although only one modified intra-prediction mode index table is illustrated in the example of figure 10, it should be understood that configuration data 250 can include a plurality of modified intra-prediction mode index tables. In addition, the number of modified intra-prediction mode index tables need not necessarily be equal to the number of intra-prediction mode index tables. In some examples, there may be a many-to-one relationship between intra-prediction mode index tables and modified intra-prediction mode index tables, so that the same modified intra-prediction mode index table can correspond to one or more intra-prediction mode index tables. Additionally, configuration data 250 includes context data 220, which includes a plurality of context records similar to context record 222A. In this example, the context record 222A includes a most likely intra-mode indicator 224A, an intra-prediction mode index table identifier 226A, a modified intra-prediction mode index table identifier 228A, and block context data 230A. Block context data 230A may include information indicating the blocks to which context record 222A applies. For example, block context data 230A may include information describing one or more block sizes to which context record 222A applies, as well as intra-prediction modes for neighboring blocks of the blocks to which context record 222A applies. As an example, the block context data for one of the context records 222 may indicate that the context record corresponds to blocks having 16x16 pixels where the neighboring block above is encoded using a horizontal intra-prediction mode and where a left neighboring block is also encoded using the horizontal intra-prediction mode.
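One way to picture a context record is as a small structure. The sketch below mirrors the elements of context record 222A in figure 10 (224A, 226A, 228A, 230A), but the field names, types, and example values are assumptions made for illustration only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextRecord:
    """Sketch of one context record (compare 222A in figure 10)."""
    most_probable_mode: int   # most likely intra-mode indicator (224A)
    mode_table_id: int        # intra-prediction mode index table identifier (226A)
    modified_table_id: int    # modified intra-prediction mode index table id (228A)
    block_size: int           # block context data (230A): applicable block size,
    above_mode: int           # intra mode of the neighboring block above,
    left_mode: int            # and intra mode of the left neighboring block

# Example from the text: 16x16 blocks whose above and left neighbors are
# both coded with a horizontal intra-prediction mode (index 1, say).
HORIZONTAL = 1  # hypothetical mode index
record = ContextRecord(most_probable_mode=HORIZONTAL, mode_table_id=0,
                       modified_table_id=0, block_size=16,
                       above_mode=HORIZONTAL, left_mode=HORIZONTAL)
```

A decoder would look up the record matching a block's size and neighbor modes, then use the table identifiers to fetch the two tables the record references.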
The most likely intra-mode indicator 224A, in this example, indicates intra-prediction mode 202-M. In some examples, configuration data 250 may specify a single-bit code word for use in representing that a block is encoded using the most likely intra-prediction mode. Thus, for blocks having contexts corresponding to the block context data 230A, the most likely intra-prediction mode is intra-prediction mode 202-M, in this example. Since intra-prediction mode 202-M is the most likely intra-prediction mode for context record 222A, intra-prediction mode 202-M does not need to be mapped to one of the code words 214 in the modified intra-prediction mode index table 210, and thus there may be one fewer code word in the modified intra-prediction mode index table 210 than there are intra-prediction modes 202 in the intra-prediction mode index table 200. In addition, the mode indices 204 that are smaller than mode index 204-M, that is, mode indices 204-1 through 204-(M-1), in this example, are mapped to modified intra-prediction mode indices 212 of the same value in the modified intra-prediction mode index table 210. For example, mode index 204-2 is mapped to modified intra-prediction mode index 212-2, in this example, because mode index 204-2 is less than mode index 204-M. Thus, when video encoder 20 encodes a block having a context defined by block context data 230A using intra-prediction mode 202-2, the video encoder 20 can signal the intra-prediction mode for the block using code word 214-2. Likewise, when the video decoder 30 receives code word 214-2 for a block having a context defined by block context data 230A, the video decoder 30 can determine that the intra-prediction mode used to encode the block (and, likewise, the intra-prediction mode to be used to decode the block) is intra-prediction mode 202-2.
Similarly, intra-prediction mode 202-(M-1) is mapped to code word 214-(M-1), because mode index 204-(M-1) is mapped to modified intra-prediction mode index 212-(M-1). On the other hand, the mode indices 204 that are larger than mode index 204-M, that is, mode indices 204-(M+1) through 204-K, in this example, are mapped to modified intra-prediction mode indices 212 that are one less than the mode index. For example, mode index 204-(K-1) is mapped to modified intra-prediction mode index 212-(K-2), in this example, because mode index 204-(K-1) is greater than mode index 204-M. Thus, when the video encoder 20 encodes a block having a context defined by block context data 230A using intra-prediction mode 202-(K-1), the video encoder 20 can signal the intra-prediction mode for the block using code word 214-(K-2). Likewise, when the video decoder 30 receives code word 214-(K-2) for a block having a context defined by block context data 230A, the video decoder 30 can determine that the intra-prediction mode used to encode the block (and, similarly, the intra-prediction mode to be used to decode the block) is intra-prediction mode 202-(K-1). Similarly, intra-prediction mode 202-(M+1) is mapped to code word 214-M, because mode index 204-(M+1) is mapped to modified intra-prediction mode index 212-M.
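With a single most likely mode at index M, the mapping of figure 10 amounts to deleting entry M from the list of mode indices, so the K-1 remaining modes fill modified indices 0 through K-2. The following sketch illustrates this under assumed values of K and M (the function name and the chosen values are not from the patent):

```python
def modified_index(j, m):
    """Figure 10 mapping: mode index j (j != m) -> modified intra-prediction
    mode index; indices below m keep their value, indices above m drop by one."""
    return j if j < m else j - 1

K, M = 9, 4  # nine modes, most likely mode at index 4 (illustrative)
mapping = {j: modified_index(j, M) for j in range(K) if j != M}

# The K-1 modes other than the most likely one fill modified indices 0..K-2,
# so the modified table needs only K-1 code words, as the text observes.
assert sorted(mapping.values()) == list(range(K - 1))
assert mapping[2] == 2          # below M: unchanged (compare 204-2 -> 212-2)
assert mapping[K - 1] == K - 2  # above M: one less (compare 204-(K-1) -> 212-(K-2))
```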
Thus, for an intra-prediction mode mapped to mode index j, the video encoder 20 can determine a code word, for intra-prediction modes other than the most likely mode, using the following step function f(j), where m represents the mode index for the most likely intra-prediction mode, and codeword(n) represents the code word assigned to modified intra-prediction mode index n:

f(j) = codeword(j),      if j < m
       codeword(j - 1),  if j > m                     (1)

Similarly, given a code word for a selected intra-prediction mode that is not the most likely mode, the video decoder 30 can determine the intra-prediction mode mapped to the code word using the step function g(n), where m represents the mode index for the most likely intra-prediction mode and mode(j) refers to the intra-prediction mode mapped to mode index j:

g(n) = mode(n),      if n < m
       mode(n + 1),  if n >= m                        (2)

When these concepts are extended to examples in which two most likely modes are used, then, for an intra-prediction mode mapped to mode index j, the video encoder 20 can determine a code word using the following step function f(j), where m1 represents the mode index for the first most likely intra-prediction mode, m2 represents the mode index for the second most likely intra-prediction mode, and codeword(n) represents the code word assigned to modified intra-prediction mode index n:

f(j) = codeword(j),      if j < m1 and j < m2
       codeword(j - 1),  if m1 < j < m2
       codeword(j - 2),  if j > m1 and j > m2         (3)

Similarly, given a code word, the video decoder 30 can determine the intra-prediction mode mapped to the code word using the following step function g(n), where m1 represents the mode index for the first most likely intra-prediction mode, m2 represents the mode index for the second most likely intra-prediction mode, and mode(j) refers to the intra-prediction mode mapped to mode index j:

g(n) = mode(n),      if n < m1
       mode(n + 1),  if n >= m1 and n + 1 < m2
       mode(n + 2),  otherwise                        (4)
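Step functions (3) and (4) are inverses of each other over the modes that are not most likely, which can be checked directly. This is a minimal Python sketch, assuming m1 < m2 as the text does; the illustrative index values are not from the patent:

```python
def f(j, m1, m2):
    """Encoder step function (3): mode index j (j not in {m1, m2})
    -> modified intra-prediction mode index."""
    if j < m1:
        return j
    if j < m2:       # m1 < j < m2
        return j - 1
    return j - 2     # j > m2

def g(n, m1, m2):
    """Decoder step function (4): modified index n -> mode index."""
    if n < m1:
        return n
    if n + 1 < m2:
        return n + 1
    return n + 2

# Round trip: g undoes f for every mode other than the two most likely ones.
m1, m2, K = 1, 6, 9  # illustrative mode indices
for j in range(K):
    if j not in (m1, m2):
        assert g(f(j, m1, m2), m1, m2) == j
```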
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions can be stored on or transmitted as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which correspond to a tangible medium such as data storage media, or communication media including any medium that facilitates the transfer of a computer program from one place to another, for example, according to a communication protocol. In this way, computer-readable media can generally correspond to (1) tangible computer-readable storage media that are non-transitory or (2) a communication medium such as a signal or carrier wave. The data storage media can be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures to implement the techniques described in this description. A computer program product may include a computer-readable medium. By way of example, and not limitation, such computer-readable storage media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection is properly termed a computer-readable medium.
For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used here, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks normally reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Instructions can be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor" as used here may refer to any of the above structures or any other structure suitable for implementing the techniques described here. Additionally, in some aspects, the functionality described here can be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated into a combined codec. Moreover, the techniques can be fully implemented in one or more circuits or logic elements. The techniques of this description can be implemented in a wide variety of devices or apparatuses, including a wireless device, an integrated circuit (IC) or a set of ICs (for example, a chip set).
Various components, modules, or units are described in this description to emphasize functional aspects of devices configured to perform the described techniques, but they do not necessarily require realization by different hardware units. Instead, as described above, the various units can be combined into one codec hardware unit or provided by a collection of interoperable hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware. Several examples have been described. These and other examples are within the scope of the appended claims.
Claims (15) [1] 1. Method of decoding video data, the method comprising: determining a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data based on a context for the current block; performing a context-based adaptive binary arithmetic coding process, CABAC, to determine a received code word, where the received code word corresponds to a modified intra-prediction mode index; determining an intra-prediction mode index, j, by comparing the modified intra-prediction mode index to an intra-prediction mode index for the first most likely mode, m1, and an intra-prediction mode index for the second most likely mode, m2; selecting, based on a mapping of intra-prediction mode indices to intra-prediction modes, an intra-prediction mode other than the first most likely intra-prediction mode and the second most likely intra-prediction mode to decode the current block, in which the selected intra-prediction mode corresponds to the determined intra-prediction mode index; and decoding the current block using the selected intra-prediction mode. [2] 2. Method according to claim 1, further comprising determining the context for the current block based at least in part on intra-prediction modes for at least one of a left neighboring block to the current block and a neighboring block above the current block, and preferably further comprising determining the context for the current block based at least in part on a size of the current block, and in particular wherein each of the intra-prediction modes corresponds to a respective intra-prediction mode index.
[3] 3. Method according to claim 2, in which determining the intra-prediction mode index comprises determining that the modified intra-prediction mode index plus one is greater than or equal to a mode index for the first most likely intra-prediction mode and greater than or equal to a mode index for the second most likely intra-prediction mode, and in which selecting the intra-prediction mode comprises selecting the intra-prediction mode corresponding to a mode index that is two greater than the modified intra-prediction mode index. [4] 4. Method according to claim 2, in which determining the intra-prediction mode index comprises determining that the modified intra-prediction mode index is less than a mode index for the first most likely intra-prediction mode and less than a mode index for the second most likely intra-prediction mode, and in which selecting the intra-prediction mode comprises selecting the intra-prediction mode having a mode index equal to the modified intra-prediction mode index. [5] 5. Method according to claim 2, in which determining the intra-prediction mode index comprises determining that the modified intra-prediction mode index plus one is greater than or equal to a mode index for the first most likely intra-prediction mode and less than a mode index for the second most likely intra-prediction mode, and in which selecting the intra-prediction mode comprises selecting the intra-prediction mode corresponding to a mode index that is one greater than the modified intra-prediction mode index. [6] 6. Method according to claim 1, further comprising: determining more than two most likely intra-prediction modes. [7] 7.
Video decoding apparatus, comprising: mechanisms for determining a first most likely intra-prediction mode and a second most likely intra-prediction mode for a current block of video data based on a context for the current block; mechanisms for performing a context-based adaptive binary arithmetic coding process, CABAC, to determine a received code word, where the received code word corresponds to a modified intra-prediction mode index; mechanisms for determining an intra-prediction mode index, j, by comparing the modified intra-prediction mode index to an intra-prediction mode index for the first most likely mode, m1, and an intra-prediction mode index for the second most likely mode, m2; mechanisms for selecting, based on a mapping of intra-prediction mode indices to intra-prediction modes, an intra-prediction mode other than the first most likely intra-prediction mode and the second most likely intra-prediction mode to decode the current block, where the selected intra-prediction mode corresponds to the determined intra-prediction mode index; and mechanisms for decoding the current block using the selected intra-prediction mode. [8] 8. Apparatus according to claim 7, in which the video decoder is additionally configured to determine the context for the current block based at least in part on intra-prediction modes for at least one of a left neighboring block to the current block and a neighboring block above the current block, and preferably in which the video decoder is additionally configured to determine the context for the current block based at least in part on a size of the current block, and in particular in which each of the intra-prediction modes corresponds to a respective mode index.
[9] 9. Apparatus according to claim 7, in which, when the modified intra-prediction mode index plus one is greater than or equal to a mode index for the first most likely intra-prediction mode and greater than or equal to a mode index for the second most likely intra-prediction mode, the mechanisms for selecting the intra-prediction mode comprise mechanisms for selecting the intra-prediction mode corresponding to a mode index that is two greater than the modified intra-prediction mode index. [10] 10. Apparatus according to claim 8, in which, when the modified intra-prediction mode index is less than a mode index for the first most likely intra-prediction mode and less than a mode index for the second most likely intra-prediction mode, the mechanisms for selecting the intra-prediction mode comprise mechanisms for selecting the intra-prediction mode having a mode index equal to the modified intra-prediction mode index. [11] 11. Apparatus according to claim 8, in which, when the modified intra-prediction mode index plus one is greater than or equal to a mode index for the first most likely intra-prediction mode and less than a mode index for the second most likely intra-prediction mode, the mechanisms for selecting the intra-prediction mode comprise mechanisms for selecting the intra-prediction mode corresponding to a mode index that is one greater than the modified intra-prediction mode index. [12] 12. Apparatus according to claim 7, further comprising: mechanisms for determining more than two most likely intra-prediction modes. [13] 13.
Video data encoding method, the method comprising: determining a first most likely intra-prediction mode, m1, and a second most likely intra-prediction mode, m2, for a current block of video data based on a coding context for the current block; encoding the current block using an intra-prediction mode other than the first most likely intra-prediction mode and the second most likely intra-prediction mode; determining an intra-prediction mode index for the intra-prediction mode other than the first most likely intra-prediction mode and the second most likely intra-prediction mode; determining a modified intra-prediction mode index by comparing the intra-prediction mode index for the intra-prediction mode other than the first most likely intra-prediction mode and the second most likely intra-prediction mode to an intra-prediction mode index for the first most likely intra-prediction mode, m1, and an intra-prediction mode index for the second most likely intra-prediction mode, m2; selecting, from a table of code words comprising a plurality of code words corresponding to modified intra-prediction mode indices, a code word corresponding to the modified intra-prediction mode index; and encoding the code word from the code word table by performing a context-based adaptive binary arithmetic coding process, CABAC. [14] 14.
Apparatus for encoding video data, the apparatus comprising: mechanisms for determining a first most likely intra-prediction mode, m1, and a second most likely intra-prediction mode, m2, for a current block of video data based on a coding context for the current block; mechanisms for encoding the current block using an intra-prediction mode other than the first most likely intra-prediction mode, m1, and the second most likely intra-prediction mode, m2; mechanisms for determining an intra-prediction mode index for the intra-prediction mode other than the first most likely intra-prediction mode and the second most likely intra-prediction mode; mechanisms for determining a modified intra-prediction mode index by comparing the intra-prediction mode index for the intra-prediction mode other than the first most likely intra-prediction mode and the second most likely intra-prediction mode to an intra-prediction mode index for the first most likely intra-prediction mode, m1, and an intra-prediction mode index for the second most likely intra-prediction mode, m2; mechanisms for selecting, from a table of code words comprising a plurality of code words corresponding to modified intra-prediction mode indices, a code word corresponding to the modified intra-prediction mode index; and mechanisms for encoding the code word from the code word table by performing a context-based adaptive binary arithmetic coding process, CABAC. [15] 15. Computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors to carry out the method as defined in any one of claims 1 to 6 or 13.
类似技术:
公开号 | 公开日 | 专利标题 BR112013017423A2|2020-09-01|indication of intra-prediction mode selection for video encoding using cabac EP2622864B1|2018-09-05|Indicating intra-prediction mode selection for video coding JP5960309B2|2016-08-02|Video coding using mapped transform and scan mode KR20190007427A|2019-01-22|Neighbor-based signaling of intra prediction modes KR101752989B1|2017-07-03|Mode decision simplification for intra prediction KR101618021B1|2016-05-03|Video coding using a subset of intra prediction modes and corresponding directional transforms ES2705746T3|2019-03-26|Initialization of states and context probabilities for adaptive entropy coding in context KR101632776B1|2016-06-22|Joint coding of syntax elements for video coding BR112013013650B1|2021-03-23|METHOD, DEVICE AND MEDIA LEGIBLE BY COMPUTER TO ENCODE COEFFICIENTS ASSOCIATED WITH A VIDEO DATA BLOCK DURING A VIDEO ENCODING PROCESS BR122020003135B1|2021-07-06|METHOD AND DEVICE FOR DECODING VIDEO DATA AND COMPUTER-READABLE NON- TRANSIENT STORAGE MEDIA KR101632130B1|2016-06-20|Reference mode selection in intra mode coding US20120163448A1|2012-06-28|Coding the position of a last significant coefficient of a video block in video coding BR112021015212A2|2021-09-28|REGULAR ENCODED BIN REDUCTION FOR COEFFICIENT ENCODING USING THRESHOLD
同族专利:
公开号 | 公开日 DK2661890T3|2018-11-19| CN103299628B|2016-10-05| AU2012204302B2|2015-05-28| JP5731013B2|2015-06-10| CN103299628A|2013-09-11| RU2013136381A|2015-02-20| ES2692387T3|2018-12-03| CA2823948C|2015-09-29| US8913662B2|2014-12-16| KR20130121932A|2013-11-06| MY164378A|2017-12-15| HUE039795T2|2019-02-28| IL226974A|2017-05-29| RU2554545C2|2015-06-27| EP2661890B1|2018-07-25| KR101518157B1|2015-05-06| US20120177118A1|2012-07-12| SI2661890T1|2018-10-30| CA2823948A1|2012-07-12| AU2012204302A1|2013-07-18| WO2012094506A1|2012-07-12| JP2014506067A|2014-03-06| SG191201A1|2013-07-31| EP2661890A1|2013-11-13|
引用文献:
公开号 | 申请日 | 公开日 | 申请人 | 专利标题 KR0155784B1|1993-12-16|1998-12-15|김광호|Adaptable variable coder/decoder method of image data| KR970009408B1|1994-01-18|1997-06-13|대우전자 주식회사|Inter/intra table selection circuit| US6765964B1|2000-12-06|2004-07-20|Realnetworks, Inc.|System and method for intracoding video data| AU2003243595A1|2000-12-06|2004-01-23|Realnetworks, Inc.|Intra coding video data methods and apparatuses| US7978765B2|2002-03-22|2011-07-12|Realnetworks, Inc.|Context-adaptive macroblock type encoding/decoding methods and apparatuses| JP4130780B2|2002-04-15|2008-08-06|松下電器産業株式会社|Image encoding method and image decoding method| US7170937B2|2002-05-01|2007-01-30|Texas Instruments Incorporated|Complexity-scalable intra-frame prediction technique| WO2003105070A1|2002-06-01|2003-12-18|Nokia Corporation|Spatial prediction based intra coding| US7289674B2|2002-06-11|2007-10-30|Nokia Corporation|Spatial prediction based intra coding| US7194137B2|2003-05-16|2007-03-20|Cisco Technology, Inc.|Variable length coding method and apparatus for video compression| JP2007043651A|2005-07-05|2007-02-15|Ntt Docomo Inc|Dynamic image encoding device, dynamic image encoding method, dynamic image encoding program, dynamic image decoding device, dynamic image decoding method, and dynamic image decoding program| US7778472B2|2006-03-27|2010-08-17|Qualcomm Incorporated|Methods and systems for significance coefficient coding in video compression| US8565314B2|2006-10-12|2013-10-22|Qualcomm Incorporated|Variable length coding table selection based on block type statistics for refinement coefficient coding| JP2008199100A|2007-02-08|2008-08-28|Toshiba Corp|Device for decoding variable length code| US7535387B1|2007-09-10|2009-05-19|Xilinx, Inc.|Methods and systems for implementing context adaptive binary arithmetic coding| BRPI0818444A2|2007-10-12|2016-10-11|Qualcomm Inc|adaptive encoding of video block header information| NO328295B1|2007-12-20|2010-01-25|Tandberg Telecom As|VLC method and device| 
US8891615B2|2008-01-08|2014-11-18|Qualcomm Incorporated|Quantization based on rate-distortion modeling for CABAC coders| WO2009094349A1|2008-01-22|2009-07-30|Dolby Laboratories Licensing Corporation|Adaptive motion information cost estimation with dynamic look-up table updating| US8761253B2|2008-05-28|2014-06-24|Nvidia Corporation|Intra prediction mode search scheme| US7932843B2|2008-10-17|2011-04-26|Texas Instruments Incorporated|Parallel CABAC decoding for video decompression| KR101507344B1|2009-08-21|2015-03-31|에스케이 텔레콤주식회사|Apparatus and Method for intra prediction mode coding using variable length code, and Recording Medium therefor| EP2514210A4|2009-12-17|2014-03-19|Ericsson Telefon Ab L M|Method and arrangement for video coding| KR101904948B1|2010-04-09|2018-10-08|엘지전자 주식회사|Method and apparatus for processing video data| US20120106640A1|2010-10-31|2012-05-03|Broadcom Corporation|Decoding side intra-prediction derivation for video coding|JP5732454B2|2009-07-06|2015-06-10|トムソン ライセンシングThomson Licensing|Method and apparatus for performing spatial change residual coding| US9716886B2|2010-08-17|2017-07-25|M&K Holdings Inc.|Method for restoring an intra prediction mode| MX2013006339A|2011-01-07|2013-08-26|Mediatek Singapore Pte Ltd|Method and apparatus of improved intra luma prediction mode coding.| WO2012134246A2|2011-04-01|2012-10-04|엘지전자 주식회사|Entropy decoding method, and decoding apparatus using same| KR101876173B1|2011-06-17|2018-07-09|엘지전자 주식회사|Method and apparatus for encoding/decoding video in intra prediction mode| US8929455B2|2011-07-01|2015-01-06|Mitsubishi Electric Research Laboratories, Inc.|Method for selecting transform types from mapping table for prediction modes| US9363511B2|2011-09-13|2016-06-07|Mediatek Singapore Pte. 
Legal status:
- 2020-09-15 | B06F | Objections, documents and/or translations needed after an examination request [chapter 6.6 patent gazette]
- 2020-09-24 | B06U | Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]
- 2021-11-23 | B350 | Update of information on the portal [chapter 15.35 patent gazette]
Priority:
- US 61/430,520 (provisional US201161430520P), filed 2011-01-06
- US 61/446,402 (provisional US201161446402P), filed 2011-02-24
- US 61/448,623 (provisional US201161448623P), filed 2011-03-02
- US 13/343,573 (granted as US8913662B2), filed 2012-01-04: "Indicating intra-prediction mode selection for video coding using CABAC"
- PCT/US2012/020346 (published as WO2012094506A1), filed 2012-01-05: "Indicating intra-prediction mode selection for video coding using CABAC"